00:00:00.001 Started by upstream project "autotest-per-patch" build number 126230
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.053 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.054 The recommended git tool is: git
00:00:00.054 using credential 00000000-0000-0000-0000-000000000002
00:00:00.056 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.079 Fetching changes from the remote Git repository
00:00:00.083 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.127 Using shallow fetch with depth 1
00:00:00.127 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.127 > git --version # timeout=10
00:00:00.174 > git --version # 'git version 2.39.2'
00:00:00.174 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.205 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.205 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:03.695 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:03.706 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:03.738 Checking out Revision 7caca6989ac753a10259529aadac5754060382af (FETCH_HEAD)
00:00:03.738 > git config core.sparsecheckout # timeout=10
00:00:03.753 > git read-tree -mu HEAD # timeout=10
00:00:03.771 > git checkout -f 7caca6989ac753a10259529aadac5754060382af # timeout=5
00:00:03.796 Commit message: "jenkins/jjb-config: Purge centos leftovers"
00:00:03.796 > git rev-list --no-walk 7caca6989ac753a10259529aadac5754060382af # timeout=10
00:00:03.881 [Pipeline] Start of Pipeline
00:00:03.899 [Pipeline] library
00:00:03.900 Loading library shm_lib@master
00:00:03.900 Library shm_lib@master is cached. Copying from home.
00:00:03.917 [Pipeline] node
00:00:03.926 Running on CYP12 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:03.928 [Pipeline] {
00:00:03.940 [Pipeline] catchError
00:00:03.941 [Pipeline] {
00:00:03.954 [Pipeline] wrap
00:00:03.964 [Pipeline] {
00:00:03.971 [Pipeline] stage
00:00:03.973 [Pipeline] { (Prologue)
00:00:04.192 [Pipeline] sh
00:00:04.476 + logger -p user.info -t JENKINS-CI
00:00:04.496 [Pipeline] echo
00:00:04.497 Node: CYP12
00:00:04.503 [Pipeline] sh
00:00:04.803 [Pipeline] setCustomBuildProperty
00:00:04.813 [Pipeline] echo
00:00:04.815 Cleanup processes
00:00:04.820 [Pipeline] sh
00:00:05.108 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.108 977824 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.121 [Pipeline] sh
00:00:05.405 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:05.405 ++ grep -v 'sudo pgrep'
00:00:05.405 ++ awk '{print $1}'
00:00:05.405 + sudo kill -9
00:00:05.405 + true
00:00:05.419 [Pipeline] cleanWs
00:00:05.429 [WS-CLEANUP] Deleting project workspace...
00:00:05.429 [WS-CLEANUP] Deferred wipeout is used...
00:00:05.437 [WS-CLEANUP] done
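Before wiping the workspace, the job kills anything still running out of the previous build's SPDK tree; the pgrep pipeline above excludes its own pgrep invocation and tolerates an empty match (hence the bare "kill -9" followed by "+ true"). The same cleanup as a reusable sketch; the function name and the xargs -r hardening are mine, not the job script's:

    # Sketch of the stale-process cleanup above (function name is hypothetical).
    cleanup_stale_spdk() {
        local ws=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
        # pgrep -af lists PID + full command line; 'grep -v' drops the pgrep
        # invocation itself; xargs -r skips kill entirely when nothing matched.
        sudo pgrep -af "$ws" | grep -v 'sudo pgrep' | awk '{print $1}' \
            | xargs -r sudo kill -9 || true
    }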
00:00:05.440 [Pipeline] setCustomBuildProperty
00:00:05.450 [Pipeline] sh
00:00:05.732 + sudo git config --global --replace-all safe.directory '*'
00:00:05.806 [Pipeline] httpRequest
00:00:05.831 [Pipeline] echo
00:00:05.832 Sorcerer 10.211.164.101 is alive
00:00:05.837 [Pipeline] httpRequest
00:00:05.841 HttpMethod: GET
00:00:05.841 URL: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.842 Sending request to url: http://10.211.164.101/packages/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:05.845 Response Code: HTTP/1.1 200 OK
00:00:05.845 Success: Status code 200 is in the accepted range: 200,404
00:00:05.845 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:07.758 [Pipeline] sh
00:00:08.045 + tar --no-same-owner -xf jbp_7caca6989ac753a10259529aadac5754060382af.tar.gz
00:00:08.060 [Pipeline] httpRequest
00:00:08.088 [Pipeline] echo
00:00:08.090 Sorcerer 10.211.164.101 is alive
00:00:08.098 [Pipeline] httpRequest
00:00:08.103 HttpMethod: GET
00:00:08.103 URL: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:08.104 Sending request to url: http://10.211.164.101/packages/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:00:08.126 Response Code: HTTP/1.1 200 OK
00:00:08.127 Success: Status code 200 is in the accepted range: 200,404
00:00:08.127 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:01:00.010 [Pipeline] sh
00:01:00.289 + tar --no-same-owner -xf spdk_6c0846996bb393be04189626d69239816f169775.tar.gz
00:01:02.845 [Pipeline] sh
00:01:03.172 + git -C spdk log --oneline -n5
00:01:03.172 6c0846996 module/bdev/nvme: add detach-monitor poller
00:01:03.172 70e80ba15 lib/nvme: add scan attached
00:01:03.172 455fda465 nvme_pci: ctrlr_scan_attached callback
00:01:03.172 a732bf2a5 nvme_transport: optional callback to scan attached
00:01:03.172 2728651ee accel: adjust task per ch define name
00:01:03.184 [Pipeline] }
00:01:03.209 [Pipeline] // stage
00:01:03.221 [Pipeline] stage
00:01:03.225 [Pipeline] { (Prepare)
00:01:03.249 [Pipeline] writeFile
00:01:03.270 [Pipeline] sh
00:01:03.553 + logger -p user.info -t JENKINS-CI
00:01:03.569 [Pipeline] sh
00:01:03.858 + logger -p user.info -t JENKINS-CI
00:01:03.871 [Pipeline] sh
00:01:04.157 + cat autorun-spdk.conf
00:01:04.157 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.157 SPDK_TEST_NVMF=1
00:01:04.157 SPDK_TEST_NVME_CLI=1
00:01:04.157 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.157 SPDK_TEST_NVMF_NICS=e810
00:01:04.157 SPDK_TEST_VFIOUSER=1
00:01:04.157 SPDK_RUN_UBSAN=1
00:01:04.157 NET_TYPE=phy
00:01:04.164 RUN_NIGHTLY=0
00:01:04.169 [Pipeline] readFile
00:01:04.195 [Pipeline] withEnv
00:01:04.197 [Pipeline] {
00:01:04.212 [Pipeline] sh
00:01:04.499 + set -ex
00:01:04.499 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:04.499 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:04.499 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:04.499 ++ SPDK_TEST_NVMF=1
00:01:04.499 ++ SPDK_TEST_NVME_CLI=1
00:01:04.499 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:04.499 ++ SPDK_TEST_NVMF_NICS=e810
00:01:04.499 ++ SPDK_TEST_VFIOUSER=1
00:01:04.499 ++ SPDK_RUN_UBSAN=1
00:01:04.499 ++ NET_TYPE=phy
00:01:04.499 ++ RUN_NIGHTLY=0
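The sourced autorun-spdk.conf is what steers the driver setup traced below: SPDK_TEST_NVMF_NICS=e810 selects Intel's ice driver, and potentially conflicting RDMA modules are unloaded before it is probed. A minimal sketch of that selection logic, reconstructed from the xtrace that follows; only the e810 arm and the two tests actually appear in this run, so everything else is an assumption:

    # Sketch reconstructed from the trace below; arms other than e810 are assumed.
    case "$SPDK_TEST_NVMF_NICS" in
        e810) DRIVERS=ice ;;   # Intel E810 NICs use the ice kernel driver
    esac
    if [[ $SPDK_TEST_NVMF_TRANSPORT == rdma ]]; then
        : # rdma-specific module handling (not taken in this tcp run; details assumed)
    fi
    if [[ -n $DRIVERS ]]; then
        # Unload RDMA providers that might claim the NIC; absent modules are OK.
        sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true
        for D in $DRIVERS; do
            sudo modprobe "$D"
        done
    fi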
00:01:04.499 + case $SPDK_TEST_NVMF_NICS in
00:01:04.499 + DRIVERS=ice
00:01:04.499 + [[ tcp == \r\d\m\a ]]
00:01:04.499 + [[ -n ice ]]
00:01:04.499 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:04.499 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:12.638 rmmod: ERROR: Module irdma is not currently loaded
00:01:12.638 rmmod: ERROR: Module i40iw is not currently loaded
00:01:12.638 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:12.638 + true
00:01:12.638 + for D in $DRIVERS
00:01:12.638 + sudo modprobe ice
00:01:12.638 + exit 0
00:01:12.649 [Pipeline] }
00:01:12.668 [Pipeline] // withEnv
00:01:12.674 [Pipeline] }
00:01:12.691 [Pipeline] // stage
00:01:12.702 [Pipeline] catchError
00:01:12.704 [Pipeline] {
00:01:12.720 [Pipeline] timeout
00:01:12.720 Timeout set to expire in 50 min
00:01:12.722 [Pipeline] {
00:01:12.738 [Pipeline] stage
00:01:12.741 [Pipeline] { (Tests)
00:01:12.758 [Pipeline] sh
00:01:13.047 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.047 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.047 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.047 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:13.047 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.047 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.047 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:13.047 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.047 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.047 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.047 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:13.047 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.047 + source /etc/os-release
00:01:13.047 ++ NAME='Fedora Linux'
00:01:13.047 ++ VERSION='38 (Cloud Edition)'
00:01:13.047 ++ ID=fedora
00:01:13.047 ++ VERSION_ID=38
00:01:13.047 ++ VERSION_CODENAME=
00:01:13.047 ++ PLATFORM_ID=platform:f38
00:01:13.047 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:13.047 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:13.047 ++ LOGO=fedora-logo-icon
00:01:13.047 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:13.047 ++ HOME_URL=https://fedoraproject.org/
00:01:13.047 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:13.047 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:13.047 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:13.048 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:13.048 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:13.048 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:13.048 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:13.048 ++ SUPPORT_END=2024-05-14
00:01:13.048 ++ VARIANT='Cloud Edition'
00:01:13.048 ++ VARIANT_ID=cloud
00:01:13.048 + uname -a
00:01:13.048 Linux spdk-cyp-12 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:13.048 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:16.350 Hugepages
00:01:16.350 node hugesize free / total
00:01:16.350 node0 1048576kB 0 / 0
00:01:16.350 node0 2048kB 0 / 0
00:01:16.350 node1 1048576kB 0 / 0
00:01:16.350 node1 2048kB 0 / 0
00:01:16.350 
00:01:16.350 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:16.350 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - -
00:01:16.350 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - -
00:01:16.350 NVMe 0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1
00:01:16.350 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - -
00:01:16.350 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - -
00:01:16.350 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - -
00:01:16.350 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - -
00:01:16.350 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - -
00:01:16.612 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - -
00:01:16.612 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - -
00:01:16.612 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - -
00:01:16.612 + rm -f /tmp/spdk-ld-path
00:01:16.612 + source autorun-spdk.conf
00:01:16.612 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.612 ++ SPDK_TEST_NVMF=1
00:01:16.612 ++ SPDK_TEST_NVME_CLI=1
00:01:16.612 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.612 ++ SPDK_TEST_NVMF_NICS=e810
00:01:16.612 ++ SPDK_TEST_VFIOUSER=1
00:01:16.612 ++ SPDK_RUN_UBSAN=1
00:01:16.612 ++ NET_TYPE=phy
00:01:16.612 ++ RUN_NIGHTLY=0
00:01:16.612 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:16.612 + [[ -n '' ]]
00:01:16.612 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:16.612 + for M in /var/spdk/build-*-manifest.txt
00:01:16.612 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:16.612 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:16.612 + for M in /var/spdk/build-*-manifest.txt
00:01:16.612 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:16.612 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:16.612 ++ uname
00:01:16.612 + [[ Linux == \L\i\n\u\x ]]
00:01:16.612 + sudo dmesg -T
00:01:16.612 + sudo dmesg --clear
00:01:16.612 + dmesg_pid=979267
00:01:16.612 + [[ Fedora Linux == FreeBSD ]]
00:01:16.612 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.612 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:16.612 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:16.612 + [[ -x /usr/src/fio-static/fio ]]
00:01:16.612 + export FIO_BIN=/usr/src/fio-static/fio
00:01:16.612 + FIO_BIN=/usr/src/fio-static/fio
00:01:16.612 + sudo dmesg -Tw
00:01:16.612 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:16.612 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:16.612 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:16.612 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.612 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:16.612 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:16.612 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.612 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:16.612 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:16.612 Test configuration:
00:01:16.612 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:16.612 SPDK_TEST_NVMF=1
00:01:16.612 SPDK_TEST_NVME_CLI=1
00:01:16.612 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:16.612 SPDK_TEST_NVMF_NICS=e810
00:01:16.612 SPDK_TEST_VFIOUSER=1
00:01:16.612 SPDK_RUN_UBSAN=1
00:01:16.612 NET_TYPE=phy
00:01:16.612 RUN_NIGHTLY=0
20:15:08 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
20:15:08 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
20:15:08 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
20:15:08 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
20:15:08 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:15:08 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:15:08 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:15:08 -- paths/export.sh@5 -- $ export PATH
20:15:08 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
20:15:08 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
20:15:08 -- common/autobuild_common.sh@444 -- $ date +%s
20:15:08 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721067308.XXXXXX
20:15:08 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721067308.0ymA6O
20:15:08 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
20:15:08 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
20:15:08 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
20:15:08 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
20:15:08 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
20:15:08 -- common/autobuild_common.sh@460 -- $ get_config_params
20:15:08 -- common/autotest_common.sh@396 -- $ xtrace_disable
20:15:08 -- common/autotest_common.sh@10 -- $ set +x
20:15:08 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
20:15:08 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
20:15:08 -- pm/common@17 -- $ local monitor
20:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
20:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
20:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
20:15:08 -- pm/common@21 -- $ date +%s
20:15:08 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
20:15:08 -- pm/common@25 -- $ sleep 1
20:15:08 -- pm/common@21 -- $ date +%s
20:15:08 -- pm/common@21 -- $ date +%s
20:15:08 -- pm/common@21 -- $ date +%s
20:15:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067308
20:15:08 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067308
20:15:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067309
20:15:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721067309
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067308_collect-vmstat.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067308_collect-cpu-load.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067309_collect-cpu-temp.pm.log
Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721067309_collect-bmc-pm.bmc.pm.log
20:15:09 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
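The four collectors above (CPU load, vmstat, CPU temperature, BMC power) are started in the background against the shared output/power directory, and the trap guarantees they are torn down when autobuild exits. A reduced sketch of that start/stop pattern; the collector paths are real, but tracking PIDs in an array is a simplification of pm/common's pidfile handling via -p:

    # Reduced sketch of the monitor pattern; pm/common actually records
    # pidfiles (the -p prefix) that stop_monitor_resources reads back.
    PM=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm
    POWER_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/power
    pids=()
    for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
        "$PM/$mon" -d "$POWER_DIR" -l -p "monitor.autobuild.sh.$(date +%s)" &
        pids+=($!)
    done
    sudo -E "$PM/collect-bmc-pm" -d "$POWER_DIR" -l -p "monitor.autobuild.sh.$(date +%s)" &
    pids+=($!)
    trap 'kill "${pids[@]}" 2>/dev/null || true' EXIT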
20:15:09 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
20:15:09 -- spdk/autobuild.sh@12 -- $ umask 022
20:15:09 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
20:15:10 -- spdk/autobuild.sh@16 -- $ date -u
00:01:17.816 Mon Jul 15 06:15:10 PM UTC 2024
20:15:10 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:17.816 v24.09-pre-210-g6c0846996
20:15:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
20:15:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
20:15:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
20:15:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
20:15:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable
20:15:10 -- common/autotest_common.sh@10 -- $ set +x
00:01:17.816 ************************************
00:01:17.816 START TEST ubsan
00:01:17.816 ************************************
20:15:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:17.816 using ubsan
00:01:17.816 
00:01:17.816 real 0m0.001s
00:01:17.816 user 0m0.000s
00:01:17.816 sys 0m0.001s
20:15:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
20:15:10 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:17.816 ************************************
00:01:17.816 END TEST ubsan
00:01:17.816 ************************************
20:15:10 -- common/autotest_common.sh@1142 -- $ return 0
20:15:10 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
20:15:10 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
20:15:10 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
20:15:10 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
20:15:10 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
20:15:10 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
20:15:10 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
20:15:10 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
20:15:10 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:18.076 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:18.076 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:18.337 Using 'verbs' RDMA provider
00:01:34.184 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:46.421 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:46.421 Creating mk/config.mk...done.
00:01:46.421 Creating mk/cc.flags.mk...done.
00:01:46.421 Type 'make' to build.
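The asterisk banners and the real/user/sys summary above come from autorun's run_test helper, which also wraps the make step that follows. A minimal re-implementation of the observable behavior; SPDK's actual run_test in autotest_common.sh additionally manages xtrace state and exit-code bookkeeping:

    # Minimal sketch of a run_test-style wrapper; not SPDK's exact function body.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                       # run the wrapped command, print timing
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }

    run_test ubsan echo 'using ubsan'   # the invocation traced above
    run_test make make -j144            # the invocation traced below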
20:15:37 -- spdk/autobuild.sh@69 -- $ run_test make make -j144
20:15:37 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
20:15:37 -- common/autotest_common.sh@1105 -- $ xtrace_disable
20:15:37 -- common/autotest_common.sh@10 -- $ set +x
00:01:46.421 ************************************
00:01:46.421 START TEST make
00:01:46.421 ************************************
20:15:38 make -- common/autotest_common.sh@1123 -- $ make -j144
00:01:46.421 make[1]: Nothing to be done for 'all'.
00:01:47.360 The Meson build system
00:01:47.360 Version: 1.3.1
00:01:47.360 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:47.360 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:47.360 Build type: native build
00:01:47.360 Project name: libvfio-user
00:01:47.360 Project version: 0.0.1
00:01:47.360 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:47.360 C linker for the host machine: cc ld.bfd 2.39-16
00:01:47.360 Host machine cpu family: x86_64
00:01:47.360 Host machine cpu: x86_64
00:01:47.360 Run-time dependency threads found: YES
00:01:47.360 Library dl found: YES
00:01:47.360 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:47.360 Run-time dependency json-c found: YES 0.17
00:01:47.360 Run-time dependency cmocka found: YES 1.1.7
00:01:47.360 Program pytest-3 found: NO
00:01:47.360 Program flake8 found: NO
00:01:47.360 Program misspell-fixer found: NO
00:01:47.360 Program restructuredtext-lint found: NO
00:01:47.360 Program valgrind found: YES (/usr/bin/valgrind)
00:01:47.360 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:47.360 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:47.360 Compiler for C supports arguments -Wwrite-strings: YES
00:01:47.360 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:47.360 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:47.360 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:47.360 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:47.360 Build targets in project: 8
00:01:47.360 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:47.360 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:47.360 
00:01:47.360 libvfio-user 0.0.1
00:01:47.360 
00:01:47.360 User defined options
00:01:47.360 buildtype : debug
00:01:47.360 default_library: shared
00:01:47.360 libdir : /usr/local/lib
00:01:47.360 
00:01:47.360 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:47.623 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:47.888 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:47.888 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:47.888 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:47.888 [4/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:47.888 [5/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:47.888 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:47.888 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:47.888 [8/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:47.888 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:47.888 [10/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:47.888 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:47.888 [12/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:47.888 [13/37] Compiling C object samples/null.p/null.c.o
00:01:47.888 [14/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:47.888 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:47.888 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:47.888 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:47.888 [18/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:47.888 [19/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:47.888 [20/37] Compiling C object samples/server.p/server.c.o
00:01:47.888 [21/37] Compiling C object samples/client.p/client.c.o
00:01:47.888 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:47.888 [23/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:47.888 [24/37] Linking target samples/client
00:01:47.888 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:47.888 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:47.888 [27/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:47.888 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:47.888 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:47.888 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:47.888 [31/37] Linking target test/unit_tests
00:01:48.149 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:48.149 [33/37] Linking target samples/server
00:01:48.149 [34/37] Linking target samples/null
00:01:48.149 [35/37] Linking target samples/shadow_ioeventfd_server
00:01:48.149 [36/37] Linking target samples/gpio-pci-idio-16
00:01:48.149 [37/37] Linking target samples/lspci
00:01:48.149 INFO: autodetecting backend as ninja
00:01:48.149 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
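The libvfio-user steps above, together with the DESTDIR install command logged just below, follow the stock meson/ninja out-of-tree flow with the options shown in the summary (buildtype debug, shared default_library, libdir /usr/local/lib). A condensed sketch of that sequence, with the workspace root shortened to $WS purely for readability:

    # Condensed sketch of the configure/build/stage sequence logged here.
    WS=/var/jenkins/workspace/nvmf-tcp-phy-autotest
    SRC=$WS/spdk/libvfio-user
    BUILD=$WS/spdk/build/libvfio-user/build-debug

    meson setup "$BUILD" "$SRC" --buildtype=debug -Ddefault_library=shared -Dlibdir=/usr/local/lib
    ninja -C "$BUILD"
    # DESTDIR must be absolute; meson stages the install tree under it.
    DESTDIR=$WS/spdk/build/libvfio-user meson install --quiet -C "$BUILD"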
00:01:48.149 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:48.411 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:48.411 ninja: no work to do. 00:01:55.001 The Meson build system 00:01:55.001 Version: 1.3.1 00:01:55.001 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:55.001 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:55.001 Build type: native build 00:01:55.001 Program cat found: YES (/usr/bin/cat) 00:01:55.001 Project name: DPDK 00:01:55.001 Project version: 24.03.0 00:01:55.001 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:55.001 C linker for the host machine: cc ld.bfd 2.39-16 00:01:55.001 Host machine cpu family: x86_64 00:01:55.001 Host machine cpu: x86_64 00:01:55.001 Message: ## Building in Developer Mode ## 00:01:55.001 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:55.001 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:55.001 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:55.001 Program python3 found: YES (/usr/bin/python3) 00:01:55.001 Program cat found: YES (/usr/bin/cat) 00:01:55.001 Compiler for C supports arguments -march=native: YES 00:01:55.001 Checking for size of "void *" : 8 00:01:55.001 Checking for size of "void *" : 8 (cached) 00:01:55.001 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:55.001 Library m found: YES 00:01:55.001 Library numa found: YES 00:01:55.001 Has header "numaif.h" : YES 00:01:55.001 Library fdt found: NO 00:01:55.001 Library execinfo found: NO 00:01:55.001 Has header "execinfo.h" : YES 00:01:55.001 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:55.001 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:55.001 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:55.001 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:55.001 Run-time dependency openssl found: YES 3.0.9 00:01:55.001 Run-time dependency libpcap found: YES 1.10.4 00:01:55.001 Has header "pcap.h" with dependency libpcap: YES 00:01:55.001 Compiler for C supports arguments -Wcast-qual: YES 00:01:55.001 Compiler for C supports arguments -Wdeprecated: YES 00:01:55.001 Compiler for C supports arguments -Wformat: YES 00:01:55.001 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:55.001 Compiler for C supports arguments -Wformat-security: NO 00:01:55.001 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:55.001 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:55.001 Compiler for C supports arguments -Wnested-externs: YES 00:01:55.001 Compiler for C supports arguments -Wold-style-definition: YES 00:01:55.001 Compiler for C supports arguments -Wpointer-arith: YES 00:01:55.001 Compiler for C supports arguments -Wsign-compare: YES 00:01:55.001 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:55.001 Compiler for C supports arguments -Wundef: YES 00:01:55.001 Compiler for C supports arguments -Wwrite-strings: YES 00:01:55.001 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:55.001 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:55.001 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:55.001 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:55.001 Program objdump found: YES (/usr/bin/objdump) 00:01:55.001 Compiler for C supports arguments -mavx512f: YES 00:01:55.001 Checking if "AVX512 checking" compiles: YES 00:01:55.001 Fetching value of define "__SSE4_2__" : 1 00:01:55.001 Fetching value of define "__AES__" : 1 00:01:55.001 Fetching value of define "__AVX__" : 1 00:01:55.001 Fetching value of define "__AVX2__" : 1 00:01:55.001 Fetching value of define "__AVX512BW__" : 1 00:01:55.001 Fetching value of define "__AVX512CD__" : 1 00:01:55.001 Fetching value of define "__AVX512DQ__" : 1 00:01:55.002 Fetching value of define "__AVX512F__" : 1 00:01:55.002 Fetching value of define "__AVX512VL__" : 1 00:01:55.002 Fetching value of define "__PCLMUL__" : 1 00:01:55.002 Fetching value of define "__RDRND__" : 1 00:01:55.002 Fetching value of define "__RDSEED__" : 1 00:01:55.002 Fetching value of define "__VPCLMULQDQ__" : 1 00:01:55.002 Fetching value of define "__znver1__" : (undefined) 00:01:55.002 Fetching value of define "__znver2__" : (undefined) 00:01:55.002 Fetching value of define "__znver3__" : (undefined) 00:01:55.002 Fetching value of define "__znver4__" : (undefined) 00:01:55.002 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:55.002 Message: lib/log: Defining dependency "log" 00:01:55.002 Message: lib/kvargs: Defining dependency "kvargs" 00:01:55.002 Message: lib/telemetry: Defining dependency "telemetry" 00:01:55.002 Checking for function "getentropy" : NO 00:01:55.002 Message: lib/eal: Defining dependency "eal" 00:01:55.002 Message: lib/ring: Defining dependency "ring" 00:01:55.002 Message: lib/rcu: Defining dependency "rcu" 00:01:55.002 Message: lib/mempool: Defining dependency "mempool" 00:01:55.002 Message: lib/mbuf: Defining dependency "mbuf" 00:01:55.002 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:55.002 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:55.002 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:55.002 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:55.002 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:55.002 Fetching value of define "__VPCLMULQDQ__" : 1 (cached) 00:01:55.002 Compiler for C supports arguments -mpclmul: YES 00:01:55.002 Compiler for C supports arguments -maes: YES 00:01:55.002 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:55.002 Compiler for C supports arguments -mavx512bw: YES 00:01:55.002 Compiler for C supports arguments -mavx512dq: YES 00:01:55.002 Compiler for C supports arguments -mavx512vl: YES 00:01:55.002 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:55.002 Compiler for C supports arguments -mavx2: YES 00:01:55.002 Compiler for C supports arguments -mavx: YES 00:01:55.002 Message: lib/net: Defining dependency "net" 00:01:55.002 Message: lib/meter: Defining dependency "meter" 00:01:55.002 Message: lib/ethdev: Defining dependency "ethdev" 00:01:55.002 Message: lib/pci: Defining dependency "pci" 00:01:55.002 Message: lib/cmdline: Defining dependency "cmdline" 00:01:55.002 Message: lib/hash: Defining dependency "hash" 00:01:55.002 Message: lib/timer: Defining dependency "timer" 00:01:55.002 Message: lib/compressdev: Defining dependency "compressdev" 00:01:55.002 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:55.002 Message: lib/dmadev: Defining dependency "dmadev" 00:01:55.002 Compiler for C 
supports arguments -Wno-cast-qual: YES 00:01:55.002 Message: lib/power: Defining dependency "power" 00:01:55.002 Message: lib/reorder: Defining dependency "reorder" 00:01:55.002 Message: lib/security: Defining dependency "security" 00:01:55.002 Has header "linux/userfaultfd.h" : YES 00:01:55.002 Has header "linux/vduse.h" : YES 00:01:55.002 Message: lib/vhost: Defining dependency "vhost" 00:01:55.002 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:55.002 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:55.002 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:55.002 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:55.002 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:55.002 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:55.002 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:55.002 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:55.002 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:55.002 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:55.002 Program doxygen found: YES (/usr/bin/doxygen) 00:01:55.002 Configuring doxy-api-html.conf using configuration 00:01:55.002 Configuring doxy-api-man.conf using configuration 00:01:55.002 Program mandb found: YES (/usr/bin/mandb) 00:01:55.002 Program sphinx-build found: NO 00:01:55.002 Configuring rte_build_config.h using configuration 00:01:55.002 Message: 00:01:55.002 ================= 00:01:55.002 Applications Enabled 00:01:55.002 ================= 00:01:55.002 00:01:55.002 apps: 00:01:55.002 00:01:55.002 00:01:55.002 Message: 00:01:55.002 ================= 00:01:55.002 Libraries Enabled 00:01:55.002 ================= 00:01:55.002 00:01:55.002 libs: 00:01:55.002 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:55.002 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:55.002 cryptodev, dmadev, power, reorder, security, vhost, 00:01:55.002 00:01:55.002 Message: 00:01:55.002 =============== 00:01:55.002 Drivers Enabled 00:01:55.002 =============== 00:01:55.002 00:01:55.002 common: 00:01:55.002 00:01:55.002 bus: 00:01:55.002 pci, vdev, 00:01:55.002 mempool: 00:01:55.002 ring, 00:01:55.002 dma: 00:01:55.002 00:01:55.002 net: 00:01:55.002 00:01:55.002 crypto: 00:01:55.002 00:01:55.002 compress: 00:01:55.002 00:01:55.002 vdpa: 00:01:55.002 00:01:55.002 00:01:55.002 Message: 00:01:55.002 ================= 00:01:55.002 Content Skipped 00:01:55.002 ================= 00:01:55.002 00:01:55.002 apps: 00:01:55.002 dumpcap: explicitly disabled via build config 00:01:55.002 graph: explicitly disabled via build config 00:01:55.002 pdump: explicitly disabled via build config 00:01:55.002 proc-info: explicitly disabled via build config 00:01:55.002 test-acl: explicitly disabled via build config 00:01:55.002 test-bbdev: explicitly disabled via build config 00:01:55.002 test-cmdline: explicitly disabled via build config 00:01:55.002 test-compress-perf: explicitly disabled via build config 00:01:55.002 test-crypto-perf: explicitly disabled via build config 00:01:55.002 test-dma-perf: explicitly disabled via build config 00:01:55.002 test-eventdev: explicitly disabled via build config 00:01:55.002 test-fib: explicitly disabled via build config 00:01:55.002 test-flow-perf: explicitly disabled via build config 00:01:55.002 test-gpudev: explicitly disabled via build config 00:01:55.002 
test-mldev: explicitly disabled via build config 00:01:55.002 test-pipeline: explicitly disabled via build config 00:01:55.002 test-pmd: explicitly disabled via build config 00:01:55.002 test-regex: explicitly disabled via build config 00:01:55.002 test-sad: explicitly disabled via build config 00:01:55.002 test-security-perf: explicitly disabled via build config 00:01:55.002 00:01:55.002 libs: 00:01:55.002 argparse: explicitly disabled via build config 00:01:55.002 metrics: explicitly disabled via build config 00:01:55.002 acl: explicitly disabled via build config 00:01:55.002 bbdev: explicitly disabled via build config 00:01:55.002 bitratestats: explicitly disabled via build config 00:01:55.002 bpf: explicitly disabled via build config 00:01:55.002 cfgfile: explicitly disabled via build config 00:01:55.002 distributor: explicitly disabled via build config 00:01:55.002 efd: explicitly disabled via build config 00:01:55.002 eventdev: explicitly disabled via build config 00:01:55.002 dispatcher: explicitly disabled via build config 00:01:55.002 gpudev: explicitly disabled via build config 00:01:55.002 gro: explicitly disabled via build config 00:01:55.002 gso: explicitly disabled via build config 00:01:55.002 ip_frag: explicitly disabled via build config 00:01:55.002 jobstats: explicitly disabled via build config 00:01:55.002 latencystats: explicitly disabled via build config 00:01:55.002 lpm: explicitly disabled via build config 00:01:55.002 member: explicitly disabled via build config 00:01:55.002 pcapng: explicitly disabled via build config 00:01:55.002 rawdev: explicitly disabled via build config 00:01:55.002 regexdev: explicitly disabled via build config 00:01:55.002 mldev: explicitly disabled via build config 00:01:55.002 rib: explicitly disabled via build config 00:01:55.002 sched: explicitly disabled via build config 00:01:55.002 stack: explicitly disabled via build config 00:01:55.002 ipsec: explicitly disabled via build config 00:01:55.002 pdcp: explicitly disabled via build config 00:01:55.002 fib: explicitly disabled via build config 00:01:55.002 port: explicitly disabled via build config 00:01:55.002 pdump: explicitly disabled via build config 00:01:55.002 table: explicitly disabled via build config 00:01:55.002 pipeline: explicitly disabled via build config 00:01:55.002 graph: explicitly disabled via build config 00:01:55.002 node: explicitly disabled via build config 00:01:55.002 00:01:55.002 drivers: 00:01:55.002 common/cpt: not in enabled drivers build config 00:01:55.002 common/dpaax: not in enabled drivers build config 00:01:55.002 common/iavf: not in enabled drivers build config 00:01:55.002 common/idpf: not in enabled drivers build config 00:01:55.002 common/ionic: not in enabled drivers build config 00:01:55.002 common/mvep: not in enabled drivers build config 00:01:55.002 common/octeontx: not in enabled drivers build config 00:01:55.002 bus/auxiliary: not in enabled drivers build config 00:01:55.002 bus/cdx: not in enabled drivers build config 00:01:55.002 bus/dpaa: not in enabled drivers build config 00:01:55.002 bus/fslmc: not in enabled drivers build config 00:01:55.002 bus/ifpga: not in enabled drivers build config 00:01:55.002 bus/platform: not in enabled drivers build config 00:01:55.002 bus/uacce: not in enabled drivers build config 00:01:55.002 bus/vmbus: not in enabled drivers build config 00:01:55.002 common/cnxk: not in enabled drivers build config 00:01:55.002 common/mlx5: not in enabled drivers build config 00:01:55.002 common/nfp: not in enabled drivers 
build config 00:01:55.002 common/nitrox: not in enabled drivers build config 00:01:55.002 common/qat: not in enabled drivers build config 00:01:55.002 common/sfc_efx: not in enabled drivers build config 00:01:55.002 mempool/bucket: not in enabled drivers build config 00:01:55.002 mempool/cnxk: not in enabled drivers build config 00:01:55.002 mempool/dpaa: not in enabled drivers build config 00:01:55.002 mempool/dpaa2: not in enabled drivers build config 00:01:55.002 mempool/octeontx: not in enabled drivers build config 00:01:55.002 mempool/stack: not in enabled drivers build config 00:01:55.002 dma/cnxk: not in enabled drivers build config 00:01:55.002 dma/dpaa: not in enabled drivers build config 00:01:55.002 dma/dpaa2: not in enabled drivers build config 00:01:55.002 dma/hisilicon: not in enabled drivers build config 00:01:55.002 dma/idxd: not in enabled drivers build config 00:01:55.002 dma/ioat: not in enabled drivers build config 00:01:55.002 dma/skeleton: not in enabled drivers build config 00:01:55.002 net/af_packet: not in enabled drivers build config 00:01:55.002 net/af_xdp: not in enabled drivers build config 00:01:55.002 net/ark: not in enabled drivers build config 00:01:55.002 net/atlantic: not in enabled drivers build config 00:01:55.002 net/avp: not in enabled drivers build config 00:01:55.003 net/axgbe: not in enabled drivers build config 00:01:55.003 net/bnx2x: not in enabled drivers build config 00:01:55.003 net/bnxt: not in enabled drivers build config 00:01:55.003 net/bonding: not in enabled drivers build config 00:01:55.003 net/cnxk: not in enabled drivers build config 00:01:55.003 net/cpfl: not in enabled drivers build config 00:01:55.003 net/cxgbe: not in enabled drivers build config 00:01:55.003 net/dpaa: not in enabled drivers build config 00:01:55.003 net/dpaa2: not in enabled drivers build config 00:01:55.003 net/e1000: not in enabled drivers build config 00:01:55.003 net/ena: not in enabled drivers build config 00:01:55.003 net/enetc: not in enabled drivers build config 00:01:55.003 net/enetfec: not in enabled drivers build config 00:01:55.003 net/enic: not in enabled drivers build config 00:01:55.003 net/failsafe: not in enabled drivers build config 00:01:55.003 net/fm10k: not in enabled drivers build config 00:01:55.003 net/gve: not in enabled drivers build config 00:01:55.003 net/hinic: not in enabled drivers build config 00:01:55.003 net/hns3: not in enabled drivers build config 00:01:55.003 net/i40e: not in enabled drivers build config 00:01:55.003 net/iavf: not in enabled drivers build config 00:01:55.003 net/ice: not in enabled drivers build config 00:01:55.003 net/idpf: not in enabled drivers build config 00:01:55.003 net/igc: not in enabled drivers build config 00:01:55.003 net/ionic: not in enabled drivers build config 00:01:55.003 net/ipn3ke: not in enabled drivers build config 00:01:55.003 net/ixgbe: not in enabled drivers build config 00:01:55.003 net/mana: not in enabled drivers build config 00:01:55.003 net/memif: not in enabled drivers build config 00:01:55.003 net/mlx4: not in enabled drivers build config 00:01:55.003 net/mlx5: not in enabled drivers build config 00:01:55.003 net/mvneta: not in enabled drivers build config 00:01:55.003 net/mvpp2: not in enabled drivers build config 00:01:55.003 net/netvsc: not in enabled drivers build config 00:01:55.003 net/nfb: not in enabled drivers build config 00:01:55.003 net/nfp: not in enabled drivers build config 00:01:55.003 net/ngbe: not in enabled drivers build config 00:01:55.003 net/null: not in 
enabled drivers build config 00:01:55.003 net/octeontx: not in enabled drivers build config 00:01:55.003 net/octeon_ep: not in enabled drivers build config 00:01:55.003 net/pcap: not in enabled drivers build config 00:01:55.003 net/pfe: not in enabled drivers build config 00:01:55.003 net/qede: not in enabled drivers build config 00:01:55.003 net/ring: not in enabled drivers build config 00:01:55.003 net/sfc: not in enabled drivers build config 00:01:55.003 net/softnic: not in enabled drivers build config 00:01:55.003 net/tap: not in enabled drivers build config 00:01:55.003 net/thunderx: not in enabled drivers build config 00:01:55.003 net/txgbe: not in enabled drivers build config 00:01:55.003 net/vdev_netvsc: not in enabled drivers build config 00:01:55.003 net/vhost: not in enabled drivers build config 00:01:55.003 net/virtio: not in enabled drivers build config 00:01:55.003 net/vmxnet3: not in enabled drivers build config 00:01:55.003 raw/*: missing internal dependency, "rawdev" 00:01:55.003 crypto/armv8: not in enabled drivers build config 00:01:55.003 crypto/bcmfs: not in enabled drivers build config 00:01:55.003 crypto/caam_jr: not in enabled drivers build config 00:01:55.003 crypto/ccp: not in enabled drivers build config 00:01:55.003 crypto/cnxk: not in enabled drivers build config 00:01:55.003 crypto/dpaa_sec: not in enabled drivers build config 00:01:55.003 crypto/dpaa2_sec: not in enabled drivers build config 00:01:55.003 crypto/ipsec_mb: not in enabled drivers build config 00:01:55.003 crypto/mlx5: not in enabled drivers build config 00:01:55.003 crypto/mvsam: not in enabled drivers build config 00:01:55.003 crypto/nitrox: not in enabled drivers build config 00:01:55.003 crypto/null: not in enabled drivers build config 00:01:55.003 crypto/octeontx: not in enabled drivers build config 00:01:55.003 crypto/openssl: not in enabled drivers build config 00:01:55.003 crypto/scheduler: not in enabled drivers build config 00:01:55.003 crypto/uadk: not in enabled drivers build config 00:01:55.003 crypto/virtio: not in enabled drivers build config 00:01:55.003 compress/isal: not in enabled drivers build config 00:01:55.003 compress/mlx5: not in enabled drivers build config 00:01:55.003 compress/nitrox: not in enabled drivers build config 00:01:55.003 compress/octeontx: not in enabled drivers build config 00:01:55.003 compress/zlib: not in enabled drivers build config 00:01:55.003 regex/*: missing internal dependency, "regexdev" 00:01:55.003 ml/*: missing internal dependency, "mldev" 00:01:55.003 vdpa/ifc: not in enabled drivers build config 00:01:55.003 vdpa/mlx5: not in enabled drivers build config 00:01:55.003 vdpa/nfp: not in enabled drivers build config 00:01:55.003 vdpa/sfc: not in enabled drivers build config 00:01:55.003 event/*: missing internal dependency, "eventdev" 00:01:55.003 baseband/*: missing internal dependency, "bbdev" 00:01:55.003 gpu/*: missing internal dependency, "gpudev" 00:01:55.003 00:01:55.003 00:01:55.003 Build targets in project: 84 00:01:55.003 00:01:55.003 DPDK 24.03.0 00:01:55.003 00:01:55.003 User defined options 00:01:55.003 buildtype : debug 00:01:55.003 default_library : shared 00:01:55.003 libdir : lib 00:01:55.003 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:55.003 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:55.003 c_link_args : 00:01:55.003 cpu_instruction_set: native 00:01:55.003 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:55.003 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:55.003 enable_docs : false 00:01:55.003 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:55.003 enable_kmods : false 00:01:55.003 max_lcores : 128 00:01:55.003 tests : false 00:01:55.003 00:01:55.003 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:55.003 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:55.003 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:55.003 [2/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:55.003 [3/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:55.003 [4/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:55.003 [5/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:55.003 [6/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:55.003 [7/267] Linking static target lib/librte_kvargs.a 00:01:55.003 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:55.003 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:55.003 [10/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:55.003 [11/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:55.003 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:55.003 [13/267] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:55.003 [14/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:55.003 [15/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:55.003 [16/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:55.003 [17/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:55.003 [18/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:55.003 [19/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:55.003 [20/267] Linking static target lib/librte_log.a 00:01:55.003 [21/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:55.003 [22/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:55.003 [23/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:55.263 [24/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:55.263 [25/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:55.263 [26/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:55.263 [27/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:55.263 [28/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:55.263 [29/267] Linking static target lib/librte_pci.a 00:01:55.263 [30/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:55.263 [31/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 
00:01:55.263 [32/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:55.263 [33/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:55.263 [34/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:55.263 [35/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:55.263 [36/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:55.263 [37/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:55.263 [38/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:55.263 [39/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:55.263 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:55.523 [41/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:55.523 [42/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.523 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:55.523 [44/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:55.523 [45/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.523 [46/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:55.523 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:55.523 [48/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:55.523 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:55.523 [50/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:55.523 [51/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:55.523 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:55.523 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:55.523 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:55.523 [55/267] Linking static target lib/librte_telemetry.a 00:01:55.523 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:55.523 [57/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:55.523 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:55.523 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:55.523 [60/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:55.523 [61/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:55.523 [62/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:55.523 [63/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:55.523 [64/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:55.523 [65/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:55.523 [66/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:55.523 [67/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:55.523 [68/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:55.523 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:55.523 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 
00:01:55.523 [71/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:55.523 [72/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:55.523 [73/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:01:55.523 [74/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:55.523 [75/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:55.523 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:55.523 [77/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:55.523 [78/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:55.523 [79/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:55.523 [80/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:55.523 [81/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:55.523 [82/267] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:55.523 [83/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:55.523 [84/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:55.523 [85/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:55.523 [86/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:55.523 [87/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:55.523 [88/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:55.523 [89/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:55.523 [90/267] Linking static target lib/librte_meter.a 00:01:55.523 [91/267] Linking static target lib/librte_timer.a 00:01:55.524 [92/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:55.524 [93/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:55.524 [94/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:55.524 [95/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:55.524 [96/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:55.524 [97/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:55.524 [98/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:55.524 [99/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:55.524 [100/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:55.524 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:55.524 [102/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:55.524 [103/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:55.524 [104/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:55.524 [105/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:55.524 [106/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:55.524 [107/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:55.524 [108/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:55.524 [109/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:55.524 [110/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:55.524 [111/267] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:55.524 [112/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:55.524 [113/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:55.524 [114/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:55.524 [115/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:55.524 [116/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:55.524 [117/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:55.524 [118/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:55.524 [119/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:55.524 [120/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:55.524 [121/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:55.524 [122/267] Linking static target lib/librte_reorder.a 00:01:55.524 [123/267] Linking static target lib/librte_mempool.a 00:01:55.524 [124/267] Linking static target lib/librte_cmdline.a 00:01:55.524 [125/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:55.524 [126/267] Linking static target lib/librte_ring.a 00:01:55.524 [127/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:55.524 [128/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:55.524 [129/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:55.524 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:55.524 [131/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:55.524 [132/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:55.524 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:55.524 [134/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:55.524 [135/267] Linking static target lib/librte_net.a 00:01:55.524 [136/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:55.785 [137/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:55.785 [138/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:55.785 [139/267] Linking static target lib/librte_power.a 00:01:55.785 [140/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:55.785 [141/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:55.785 [142/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:55.785 [143/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:55.785 [144/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:55.785 [145/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:55.785 [146/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:55.785 [147/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:55.785 [148/267] Linking static target lib/librte_dmadev.a 00:01:55.785 [149/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:55.785 [150/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:55.785 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:55.785 [152/267] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:55.785 [153/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:55.785 [154/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:55.785 [155/267] Linking static target lib/librte_compressdev.a 00:01:55.785 [156/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:55.785 [157/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:55.785 [158/267] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.785 [159/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:55.785 [160/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:55.785 [161/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:55.785 [162/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:55.785 [163/267] Linking static target lib/librte_rcu.a 00:01:55.785 [164/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:55.785 [165/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:55.785 [166/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:55.785 [167/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:55.785 [168/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:55.785 [169/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:55.785 [170/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:55.785 [171/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:55.785 [172/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:55.785 [173/267] Linking target lib/librte_log.so.24.1 00:01:55.785 [174/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:55.785 [175/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:55.785 [176/267] Linking static target lib/librte_eal.a 00:01:55.785 [177/267] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:55.785 [178/267] Linking static target lib/librte_security.a 00:01:55.785 [179/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:55.785 [180/267] Linking static target lib/librte_mbuf.a 00:01:55.785 [181/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:55.785 [182/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:55.785 [183/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:55.785 [184/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.785 [185/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:55.785 [186/267] Linking static target drivers/librte_bus_vdev.a 00:01:55.785 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:55.785 [188/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:55.785 [189/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:56.044 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:56.044 [191/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.044 [192/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:56.044 [193/267] Compiling C object 
lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:56.044 [194/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:56.044 [195/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:56.044 [196/267] Linking static target drivers/librte_mempool_ring.a 00:01:56.044 [197/267] Linking static target lib/librte_hash.a 00:01:56.044 [198/267] Linking target lib/librte_kvargs.so.24.1 00:01:56.044 [199/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.044 [200/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:56.044 [201/267] Linking static target drivers/librte_bus_pci.a 00:01:56.044 [202/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [203/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [204/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [205/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [206/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [207/267] Linking target lib/librte_telemetry.so.24.1 00:01:56.044 [208/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:56.044 [209/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.044 [210/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:56.044 [211/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:56.044 [212/267] Linking static target lib/librte_cryptodev.a 00:01:56.304 [213/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:56.304 [214/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.304 [215/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.564 [216/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.564 [217/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.564 [218/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:56.564 [219/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.564 [220/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:56.564 [221/267] Linking static target lib/librte_ethdev.a 00:01:56.564 [222/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.564 [223/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.823 [224/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.823 [225/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.083 [226/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.343 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:57.343 [228/267] Linking static target lib/librte_vhost.a 00:01:58.283 [229/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by 
meson to capture output) 00:01:59.667 [230/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.392 [231/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.334 [232/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:07.595 [233/267] Linking target lib/librte_eal.so.24.1 00:02:07.595 [234/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:07.595 [235/267] Linking target lib/librte_pci.so.24.1 00:02:07.595 [236/267] Linking target lib/librte_ring.so.24.1 00:02:07.595 [237/267] Linking target lib/librte_dmadev.so.24.1 00:02:07.595 [238/267] Linking target lib/librte_meter.so.24.1 00:02:07.595 [239/267] Linking target lib/librte_timer.so.24.1 00:02:07.595 [240/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:07.856 [241/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:07.856 [242/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:07.856 [243/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:07.856 [244/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:07.856 [245/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:07.856 [246/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:07.856 [247/267] Linking target lib/librte_rcu.so.24.1 00:02:07.856 [248/267] Linking target lib/librte_mempool.so.24.1 00:02:08.118 [249/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:08.118 [250/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:08.118 [251/267] Linking target lib/librte_mbuf.so.24.1 00:02:08.118 [252/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:08.118 [253/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:08.378 [254/267] Linking target lib/librte_compressdev.so.24.1 00:02:08.378 [255/267] Linking target lib/librte_reorder.so.24.1 00:02:08.378 [256/267] Linking target lib/librte_net.so.24.1 00:02:08.378 [257/267] Linking target lib/librte_cryptodev.so.24.1 00:02:08.378 [258/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:08.378 [259/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:08.378 [260/267] Linking target lib/librte_hash.so.24.1 00:02:08.378 [261/267] Linking target lib/librte_cmdline.so.24.1 00:02:08.378 [262/267] Linking target lib/librte_ethdev.so.24.1 00:02:08.378 [263/267] Linking target lib/librte_security.so.24.1 00:02:08.640 [264/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:08.640 [265/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:08.640 [266/267] Linking target lib/librte_power.so.24.1 00:02:08.640 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:08.640 INFO: autodetecting backend as ninja 00:02:08.640 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 144 00:02:10.027 CC lib/log/log.o 00:02:10.027 CC lib/log/log_flags.o 00:02:10.027 CC lib/log/log_deprecated.o 00:02:10.027 CC lib/ut/ut.o 00:02:10.027 CC lib/ut_mock/mock.o 00:02:10.027 LIB libspdk_log.a 00:02:10.027 LIB libspdk_ut.a 00:02:10.027 LIB 
libspdk_ut_mock.a 00:02:10.027 SO libspdk_log.so.7.0 00:02:10.027 SO libspdk_ut.so.2.0 00:02:10.027 SO libspdk_ut_mock.so.6.0 00:02:10.027 SYMLINK libspdk_ut.so 00:02:10.027 SYMLINK libspdk_log.so 00:02:10.027 SYMLINK libspdk_ut_mock.so 00:02:10.600 CC lib/util/base64.o 00:02:10.600 CC lib/util/bit_array.o 00:02:10.600 CC lib/util/cpuset.o 00:02:10.600 CC lib/util/crc16.o 00:02:10.600 CC lib/util/crc32.o 00:02:10.600 CC lib/util/crc32c.o 00:02:10.600 CC lib/util/crc32_ieee.o 00:02:10.600 CC lib/util/crc64.o 00:02:10.600 CXX lib/trace_parser/trace.o 00:02:10.600 CC lib/util/fd.o 00:02:10.600 CC lib/util/dif.o 00:02:10.600 CC lib/util/file.o 00:02:10.600 CC lib/util/hexlify.o 00:02:10.600 CC lib/util/math.o 00:02:10.600 CC lib/util/iov.o 00:02:10.600 CC lib/util/pipe.o 00:02:10.600 CC lib/util/strerror_tls.o 00:02:10.600 CC lib/util/string.o 00:02:10.600 CC lib/util/uuid.o 00:02:10.600 CC lib/util/fd_group.o 00:02:10.600 CC lib/util/xor.o 00:02:10.600 CC lib/util/zipf.o 00:02:10.600 CC lib/dma/dma.o 00:02:10.600 CC lib/ioat/ioat.o 00:02:10.600 CC lib/vfio_user/host/vfio_user.o 00:02:10.600 CC lib/vfio_user/host/vfio_user_pci.o 00:02:10.862 LIB libspdk_dma.a 00:02:10.862 SO libspdk_dma.so.4.0 00:02:10.862 LIB libspdk_ioat.a 00:02:10.862 SYMLINK libspdk_dma.so 00:02:10.862 SO libspdk_ioat.so.7.0 00:02:10.862 SYMLINK libspdk_ioat.so 00:02:10.862 LIB libspdk_vfio_user.a 00:02:10.862 SO libspdk_vfio_user.so.5.0 00:02:11.121 LIB libspdk_util.a 00:02:11.121 SYMLINK libspdk_vfio_user.so 00:02:11.121 SO libspdk_util.so.9.1 00:02:11.121 SYMLINK libspdk_util.so 00:02:11.383 LIB libspdk_trace_parser.a 00:02:11.383 SO libspdk_trace_parser.so.5.0 00:02:11.383 SYMLINK libspdk_trace_parser.so 00:02:11.644 CC lib/vmd/vmd.o 00:02:11.644 CC lib/rdma_provider/common.o 00:02:11.644 CC lib/vmd/led.o 00:02:11.644 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:11.644 CC lib/json/json_parse.o 00:02:11.644 CC lib/json/json_util.o 00:02:11.644 CC lib/idxd/idxd.o 00:02:11.644 CC lib/json/json_write.o 00:02:11.644 CC lib/idxd/idxd_user.o 00:02:11.644 CC lib/idxd/idxd_kernel.o 00:02:11.644 CC lib/conf/conf.o 00:02:11.644 CC lib/rdma_utils/rdma_utils.o 00:02:11.644 CC lib/env_dpdk/env.o 00:02:11.644 CC lib/env_dpdk/memory.o 00:02:11.644 CC lib/env_dpdk/pci.o 00:02:11.644 CC lib/env_dpdk/init.o 00:02:11.644 CC lib/env_dpdk/threads.o 00:02:11.644 CC lib/env_dpdk/pci_ioat.o 00:02:11.644 CC lib/env_dpdk/pci_virtio.o 00:02:11.644 CC lib/env_dpdk/pci_vmd.o 00:02:11.644 CC lib/env_dpdk/pci_idxd.o 00:02:11.644 CC lib/env_dpdk/pci_event.o 00:02:11.644 CC lib/env_dpdk/sigbus_handler.o 00:02:11.644 CC lib/env_dpdk/pci_dpdk.o 00:02:11.644 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:11.644 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:11.906 LIB libspdk_rdma_provider.a 00:02:11.906 LIB libspdk_conf.a 00:02:11.906 SO libspdk_rdma_provider.so.6.0 00:02:11.906 LIB libspdk_rdma_utils.a 00:02:11.906 SO libspdk_conf.so.6.0 00:02:11.906 LIB libspdk_json.a 00:02:11.906 SO libspdk_rdma_utils.so.1.0 00:02:11.906 SYMLINK libspdk_rdma_provider.so 00:02:11.906 SO libspdk_json.so.6.0 00:02:11.906 SYMLINK libspdk_conf.so 00:02:11.906 SYMLINK libspdk_rdma_utils.so 00:02:11.906 SYMLINK libspdk_json.so 00:02:12.167 LIB libspdk_idxd.a 00:02:12.167 SO libspdk_idxd.so.12.0 00:02:12.167 LIB libspdk_vmd.a 00:02:12.167 SO libspdk_vmd.so.6.0 00:02:12.167 SYMLINK libspdk_idxd.so 00:02:12.167 SYMLINK libspdk_vmd.so 00:02:12.429 CC lib/jsonrpc/jsonrpc_server.o 00:02:12.429 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:12.429 CC lib/jsonrpc/jsonrpc_client.o 00:02:12.429 CC 
lib/jsonrpc/jsonrpc_client_tcp.o 00:02:12.690 LIB libspdk_jsonrpc.a 00:02:12.690 SO libspdk_jsonrpc.so.6.0 00:02:12.690 SYMLINK libspdk_jsonrpc.so 00:02:12.690 LIB libspdk_env_dpdk.a 00:02:12.951 SO libspdk_env_dpdk.so.14.1 00:02:12.951 SYMLINK libspdk_env_dpdk.so 00:02:12.951 CC lib/rpc/rpc.o 00:02:13.212 LIB libspdk_rpc.a 00:02:13.212 SO libspdk_rpc.so.6.0 00:02:13.473 SYMLINK libspdk_rpc.so 00:02:13.734 CC lib/keyring/keyring.o 00:02:13.734 CC lib/keyring/keyring_rpc.o 00:02:13.734 CC lib/notify/notify.o 00:02:13.734 CC lib/notify/notify_rpc.o 00:02:13.734 CC lib/trace/trace.o 00:02:13.734 CC lib/trace/trace_flags.o 00:02:13.734 CC lib/trace/trace_rpc.o 00:02:13.995 LIB libspdk_notify.a 00:02:13.995 SO libspdk_notify.so.6.0 00:02:13.995 LIB libspdk_keyring.a 00:02:13.995 LIB libspdk_trace.a 00:02:13.995 SO libspdk_keyring.so.1.0 00:02:13.995 SYMLINK libspdk_notify.so 00:02:13.995 SO libspdk_trace.so.10.0 00:02:13.995 SYMLINK libspdk_keyring.so 00:02:13.995 SYMLINK libspdk_trace.so 00:02:14.568 CC lib/thread/thread.o 00:02:14.568 CC lib/thread/iobuf.o 00:02:14.568 CC lib/sock/sock.o 00:02:14.568 CC lib/sock/sock_rpc.o 00:02:14.829 LIB libspdk_sock.a 00:02:14.829 SO libspdk_sock.so.10.0 00:02:14.829 SYMLINK libspdk_sock.so 00:02:15.400 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:15.400 CC lib/nvme/nvme_ctrlr.o 00:02:15.400 CC lib/nvme/nvme_fabric.o 00:02:15.400 CC lib/nvme/nvme_ns_cmd.o 00:02:15.400 CC lib/nvme/nvme_ns.o 00:02:15.400 CC lib/nvme/nvme_qpair.o 00:02:15.400 CC lib/nvme/nvme_pcie_common.o 00:02:15.400 CC lib/nvme/nvme_pcie.o 00:02:15.400 CC lib/nvme/nvme.o 00:02:15.400 CC lib/nvme/nvme_quirks.o 00:02:15.400 CC lib/nvme/nvme_transport.o 00:02:15.400 CC lib/nvme/nvme_discovery.o 00:02:15.400 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:15.400 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:15.400 CC lib/nvme/nvme_tcp.o 00:02:15.400 CC lib/nvme/nvme_opal.o 00:02:15.400 CC lib/nvme/nvme_io_msg.o 00:02:15.400 CC lib/nvme/nvme_poll_group.o 00:02:15.400 CC lib/nvme/nvme_zns.o 00:02:15.400 CC lib/nvme/nvme_stubs.o 00:02:15.400 CC lib/nvme/nvme_auth.o 00:02:15.400 CC lib/nvme/nvme_cuse.o 00:02:15.400 CC lib/nvme/nvme_vfio_user.o 00:02:15.400 CC lib/nvme/nvme_rdma.o 00:02:15.659 LIB libspdk_thread.a 00:02:15.659 SO libspdk_thread.so.10.1 00:02:15.920 SYMLINK libspdk_thread.so 00:02:16.180 CC lib/blob/blobstore.o 00:02:16.180 CC lib/blob/zeroes.o 00:02:16.180 CC lib/blob/request.o 00:02:16.180 CC lib/blob/blob_bs_dev.o 00:02:16.180 CC lib/init/subsystem.o 00:02:16.180 CC lib/init/json_config.o 00:02:16.180 CC lib/init/subsystem_rpc.o 00:02:16.180 CC lib/init/rpc.o 00:02:16.180 CC lib/virtio/virtio.o 00:02:16.180 CC lib/virtio/virtio_vhost_user.o 00:02:16.180 CC lib/virtio/virtio_vfio_user.o 00:02:16.180 CC lib/vfu_tgt/tgt_endpoint.o 00:02:16.180 CC lib/virtio/virtio_pci.o 00:02:16.180 CC lib/vfu_tgt/tgt_rpc.o 00:02:16.180 CC lib/accel/accel_rpc.o 00:02:16.180 CC lib/accel/accel.o 00:02:16.180 CC lib/accel/accel_sw.o 00:02:16.442 LIB libspdk_init.a 00:02:16.442 SO libspdk_init.so.5.0 00:02:16.442 LIB libspdk_vfu_tgt.a 00:02:16.442 LIB libspdk_virtio.a 00:02:16.442 SO libspdk_vfu_tgt.so.3.0 00:02:16.442 SYMLINK libspdk_init.so 00:02:16.442 SO libspdk_virtio.so.7.0 00:02:16.442 SYMLINK libspdk_vfu_tgt.so 00:02:16.703 SYMLINK libspdk_virtio.so 00:02:16.964 CC lib/event/app.o 00:02:16.964 CC lib/event/reactor.o 00:02:16.964 CC lib/event/log_rpc.o 00:02:16.964 CC lib/event/app_rpc.o 00:02:16.964 CC lib/event/scheduler_static.o 00:02:16.964 LIB libspdk_accel.a 00:02:17.224 SO libspdk_accel.so.15.1 00:02:17.224 LIB 
libspdk_nvme.a 00:02:17.224 SYMLINK libspdk_accel.so 00:02:17.224 LIB libspdk_event.a 00:02:17.224 SO libspdk_nvme.so.13.1 00:02:17.224 SO libspdk_event.so.14.0 00:02:17.486 SYMLINK libspdk_event.so 00:02:17.486 CC lib/bdev/bdev.o 00:02:17.486 CC lib/bdev/part.o 00:02:17.486 CC lib/bdev/bdev_rpc.o 00:02:17.486 CC lib/bdev/bdev_zone.o 00:02:17.486 CC lib/bdev/scsi_nvme.o 00:02:17.486 SYMLINK libspdk_nvme.so 00:02:18.871 LIB libspdk_blob.a 00:02:18.871 SO libspdk_blob.so.11.0 00:02:18.871 SYMLINK libspdk_blob.so 00:02:19.133 CC lib/lvol/lvol.o 00:02:19.133 CC lib/blobfs/blobfs.o 00:02:19.133 CC lib/blobfs/tree.o 00:02:19.706 LIB libspdk_bdev.a 00:02:19.706 SO libspdk_bdev.so.15.1 00:02:19.706 SYMLINK libspdk_bdev.so 00:02:19.968 LIB libspdk_blobfs.a 00:02:19.968 SO libspdk_blobfs.so.10.0 00:02:19.968 LIB libspdk_lvol.a 00:02:19.968 SYMLINK libspdk_blobfs.so 00:02:19.968 SO libspdk_lvol.so.10.0 00:02:19.968 SYMLINK libspdk_lvol.so 00:02:20.227 CC lib/ublk/ublk.o 00:02:20.227 CC lib/ublk/ublk_rpc.o 00:02:20.227 CC lib/scsi/dev.o 00:02:20.227 CC lib/scsi/lun.o 00:02:20.227 CC lib/scsi/port.o 00:02:20.227 CC lib/scsi/scsi.o 00:02:20.227 CC lib/scsi/scsi_bdev.o 00:02:20.227 CC lib/scsi/scsi_pr.o 00:02:20.227 CC lib/scsi/scsi_rpc.o 00:02:20.227 CC lib/scsi/task.o 00:02:20.227 CC lib/nbd/nbd.o 00:02:20.227 CC lib/nvmf/ctrlr.o 00:02:20.227 CC lib/nbd/nbd_rpc.o 00:02:20.227 CC lib/ftl/ftl_core.o 00:02:20.227 CC lib/nvmf/ctrlr_discovery.o 00:02:20.227 CC lib/ftl/ftl_init.o 00:02:20.227 CC lib/nvmf/ctrlr_bdev.o 00:02:20.227 CC lib/ftl/ftl_layout.o 00:02:20.227 CC lib/nvmf/subsystem.o 00:02:20.227 CC lib/ftl/ftl_debug.o 00:02:20.227 CC lib/nvmf/nvmf.o 00:02:20.227 CC lib/nvmf/nvmf_rpc.o 00:02:20.227 CC lib/ftl/ftl_io.o 00:02:20.227 CC lib/ftl/ftl_sb.o 00:02:20.227 CC lib/nvmf/transport.o 00:02:20.227 CC lib/ftl/ftl_l2p.o 00:02:20.227 CC lib/nvmf/tcp.o 00:02:20.227 CC lib/ftl/ftl_l2p_flat.o 00:02:20.227 CC lib/nvmf/stubs.o 00:02:20.227 CC lib/ftl/ftl_nv_cache.o 00:02:20.227 CC lib/nvmf/mdns_server.o 00:02:20.227 CC lib/ftl/ftl_band.o 00:02:20.227 CC lib/ftl/ftl_band_ops.o 00:02:20.227 CC lib/nvmf/vfio_user.o 00:02:20.227 CC lib/ftl/ftl_writer.o 00:02:20.227 CC lib/nvmf/rdma.o 00:02:20.227 CC lib/ftl/ftl_rq.o 00:02:20.227 CC lib/nvmf/auth.o 00:02:20.227 CC lib/ftl/ftl_reloc.o 00:02:20.227 CC lib/ftl/ftl_l2p_cache.o 00:02:20.227 CC lib/ftl/ftl_p2l.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:20.227 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:20.227 CC lib/ftl/utils/ftl_conf.o 00:02:20.227 CC lib/ftl/utils/ftl_md.o 00:02:20.227 CC lib/ftl/utils/ftl_mempool.o 00:02:20.227 CC lib/ftl/utils/ftl_bitmap.o 00:02:20.227 CC lib/ftl/utils/ftl_property.o 00:02:20.227 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:20.227 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:20.227 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:20.227 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:20.227 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:20.227 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:20.227 CC lib/ftl/upgrade/ftl_trim_upgrade.o 
00:02:20.227 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:20.227 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:20.227 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:20.227 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:20.227 CC lib/ftl/base/ftl_base_bdev.o 00:02:20.227 CC lib/ftl/base/ftl_base_dev.o 00:02:20.227 CC lib/ftl/ftl_trace.o 00:02:20.798 LIB libspdk_nbd.a 00:02:20.798 LIB libspdk_scsi.a 00:02:20.798 SO libspdk_nbd.so.7.0 00:02:20.798 SO libspdk_scsi.so.9.0 00:02:20.798 SYMLINK libspdk_nbd.so 00:02:20.798 LIB libspdk_ublk.a 00:02:20.798 SYMLINK libspdk_scsi.so 00:02:20.798 SO libspdk_ublk.so.3.0 00:02:21.060 SYMLINK libspdk_ublk.so 00:02:21.060 LIB libspdk_ftl.a 00:02:21.321 CC lib/vhost/vhost.o 00:02:21.321 CC lib/vhost/vhost_rpc.o 00:02:21.321 CC lib/vhost/vhost_scsi.o 00:02:21.321 CC lib/vhost/vhost_blk.o 00:02:21.321 CC lib/iscsi/conn.o 00:02:21.321 CC lib/vhost/rte_vhost_user.o 00:02:21.321 CC lib/iscsi/init_grp.o 00:02:21.321 CC lib/iscsi/iscsi.o 00:02:21.321 CC lib/iscsi/md5.o 00:02:21.321 CC lib/iscsi/portal_grp.o 00:02:21.321 CC lib/iscsi/param.o 00:02:21.321 CC lib/iscsi/tgt_node.o 00:02:21.322 CC lib/iscsi/iscsi_subsystem.o 00:02:21.322 CC lib/iscsi/iscsi_rpc.o 00:02:21.322 CC lib/iscsi/task.o 00:02:21.322 SO libspdk_ftl.so.9.0 00:02:21.892 SYMLINK libspdk_ftl.so 00:02:22.152 LIB libspdk_nvmf.a 00:02:22.152 SO libspdk_nvmf.so.18.1 00:02:22.152 LIB libspdk_vhost.a 00:02:22.152 SO libspdk_vhost.so.8.0 00:02:22.413 SYMLINK libspdk_nvmf.so 00:02:22.413 SYMLINK libspdk_vhost.so 00:02:22.413 LIB libspdk_iscsi.a 00:02:22.413 SO libspdk_iscsi.so.8.0 00:02:22.674 SYMLINK libspdk_iscsi.so 00:02:23.245 CC module/vfu_device/vfu_virtio.o 00:02:23.245 CC module/vfu_device/vfu_virtio_blk.o 00:02:23.245 CC module/vfu_device/vfu_virtio_scsi.o 00:02:23.245 CC module/vfu_device/vfu_virtio_rpc.o 00:02:23.245 CC module/env_dpdk/env_dpdk_rpc.o 00:02:23.506 LIB libspdk_env_dpdk_rpc.a 00:02:23.506 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:23.506 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:23.506 CC module/accel/error/accel_error.o 00:02:23.506 CC module/accel/error/accel_error_rpc.o 00:02:23.506 CC module/scheduler/gscheduler/gscheduler.o 00:02:23.506 CC module/accel/ioat/accel_ioat_rpc.o 00:02:23.506 CC module/accel/ioat/accel_ioat.o 00:02:23.506 CC module/accel/iaa/accel_iaa.o 00:02:23.506 CC module/accel/iaa/accel_iaa_rpc.o 00:02:23.506 CC module/accel/dsa/accel_dsa.o 00:02:23.506 CC module/accel/dsa/accel_dsa_rpc.o 00:02:23.506 CC module/keyring/linux/keyring.o 00:02:23.506 CC module/sock/posix/posix.o 00:02:23.506 CC module/keyring/linux/keyring_rpc.o 00:02:23.506 CC module/keyring/file/keyring.o 00:02:23.506 CC module/blob/bdev/blob_bdev.o 00:02:23.506 CC module/keyring/file/keyring_rpc.o 00:02:23.506 SO libspdk_env_dpdk_rpc.so.6.0 00:02:23.506 SYMLINK libspdk_env_dpdk_rpc.so 00:02:23.506 LIB libspdk_scheduler_dpdk_governor.a 00:02:23.506 LIB libspdk_keyring_linux.a 00:02:23.506 LIB libspdk_scheduler_gscheduler.a 00:02:23.506 LIB libspdk_accel_error.a 00:02:23.506 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:23.506 LIB libspdk_keyring_file.a 00:02:23.506 LIB libspdk_scheduler_dynamic.a 00:02:23.506 SO libspdk_scheduler_gscheduler.so.4.0 00:02:23.506 SO libspdk_keyring_linux.so.1.0 00:02:23.767 LIB libspdk_accel_ioat.a 00:02:23.767 SO libspdk_accel_error.so.2.0 00:02:23.767 SO libspdk_keyring_file.so.1.0 00:02:23.767 LIB libspdk_accel_iaa.a 00:02:23.767 SO libspdk_scheduler_dynamic.so.4.0 00:02:23.767 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:23.767 SO libspdk_accel_ioat.so.6.0 
00:02:23.767 LIB libspdk_blob_bdev.a 00:02:23.767 LIB libspdk_accel_dsa.a 00:02:23.767 SO libspdk_accel_iaa.so.3.0 00:02:23.767 SYMLINK libspdk_scheduler_gscheduler.so 00:02:23.767 SYMLINK libspdk_keyring_linux.so 00:02:23.767 SYMLINK libspdk_accel_error.so 00:02:23.767 SYMLINK libspdk_keyring_file.so 00:02:23.767 SO libspdk_accel_dsa.so.5.0 00:02:23.767 SO libspdk_blob_bdev.so.11.0 00:02:23.767 SYMLINK libspdk_scheduler_dynamic.so 00:02:23.767 SYMLINK libspdk_accel_ioat.so 00:02:23.767 LIB libspdk_vfu_device.a 00:02:23.767 SYMLINK libspdk_accel_iaa.so 00:02:23.767 SYMLINK libspdk_accel_dsa.so 00:02:23.767 SYMLINK libspdk_blob_bdev.so 00:02:23.767 SO libspdk_vfu_device.so.3.0 00:02:24.029 SYMLINK libspdk_vfu_device.so 00:02:24.029 LIB libspdk_sock_posix.a 00:02:24.289 SO libspdk_sock_posix.so.6.0 00:02:24.289 SYMLINK libspdk_sock_posix.so 00:02:24.290 CC module/bdev/delay/vbdev_delay.o 00:02:24.290 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:24.290 CC module/bdev/lvol/vbdev_lvol.o 00:02:24.290 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:24.290 CC module/bdev/error/vbdev_error.o 00:02:24.290 CC module/blobfs/bdev/blobfs_bdev.o 00:02:24.290 CC module/bdev/error/vbdev_error_rpc.o 00:02:24.290 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:24.290 CC module/bdev/passthru/vbdev_passthru.o 00:02:24.290 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:24.290 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:24.290 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:24.290 CC module/bdev/gpt/gpt.o 00:02:24.290 CC module/bdev/gpt/vbdev_gpt.o 00:02:24.290 CC module/bdev/raid/bdev_raid.o 00:02:24.290 CC module/bdev/raid/bdev_raid_sb.o 00:02:24.290 CC module/bdev/raid/bdev_raid_rpc.o 00:02:24.290 CC module/bdev/malloc/bdev_malloc.o 00:02:24.290 CC module/bdev/ftl/bdev_ftl.o 00:02:24.290 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:24.290 CC module/bdev/raid/raid0.o 00:02:24.290 CC module/bdev/null/bdev_null.o 00:02:24.290 CC module/bdev/raid/concat.o 00:02:24.290 CC module/bdev/raid/raid1.o 00:02:24.290 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:24.290 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:24.290 CC module/bdev/null/bdev_null_rpc.o 00:02:24.290 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:24.290 CC module/bdev/split/vbdev_split.o 00:02:24.290 CC module/bdev/aio/bdev_aio.o 00:02:24.290 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:24.290 CC module/bdev/split/vbdev_split_rpc.o 00:02:24.290 CC module/bdev/aio/bdev_aio_rpc.o 00:02:24.290 CC module/bdev/nvme/bdev_nvme.o 00:02:24.290 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:24.290 CC module/bdev/nvme/nvme_rpc.o 00:02:24.290 CC module/bdev/nvme/bdev_mdns_client.o 00:02:24.290 CC module/bdev/iscsi/bdev_iscsi.o 00:02:24.290 CC module/bdev/nvme/vbdev_opal.o 00:02:24.290 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:24.290 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:24.290 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:24.552 LIB libspdk_blobfs_bdev.a 00:02:24.552 SO libspdk_blobfs_bdev.so.6.0 00:02:24.552 LIB libspdk_bdev_ftl.a 00:02:24.552 LIB libspdk_bdev_error.a 00:02:24.552 LIB libspdk_bdev_split.a 00:02:24.552 SO libspdk_bdev_ftl.so.6.0 00:02:24.552 LIB libspdk_bdev_gpt.a 00:02:24.552 SO libspdk_bdev_error.so.6.0 00:02:24.552 LIB libspdk_bdev_null.a 00:02:24.552 SO libspdk_bdev_split.so.6.0 00:02:24.813 LIB libspdk_bdev_passthru.a 00:02:24.813 SYMLINK libspdk_blobfs_bdev.so 00:02:24.813 SO libspdk_bdev_gpt.so.6.0 00:02:24.813 LIB libspdk_bdev_zone_block.a 00:02:24.813 SO libspdk_bdev_null.so.6.0 00:02:24.813 SO libspdk_bdev_passthru.so.6.0 
00:02:24.813 LIB libspdk_bdev_delay.a 00:02:24.813 LIB libspdk_bdev_aio.a 00:02:24.813 SYMLINK libspdk_bdev_error.so 00:02:24.813 SYMLINK libspdk_bdev_ftl.so 00:02:24.813 LIB libspdk_bdev_malloc.a 00:02:24.813 SO libspdk_bdev_zone_block.so.6.0 00:02:24.813 SYMLINK libspdk_bdev_split.so 00:02:24.813 SYMLINK libspdk_bdev_gpt.so 00:02:24.813 SO libspdk_bdev_aio.so.6.0 00:02:24.813 SO libspdk_bdev_delay.so.6.0 00:02:24.813 LIB libspdk_bdev_iscsi.a 00:02:24.813 SYMLINK libspdk_bdev_passthru.so 00:02:24.813 SYMLINK libspdk_bdev_null.so 00:02:24.813 SO libspdk_bdev_malloc.so.6.0 00:02:24.813 SYMLINK libspdk_bdev_zone_block.so 00:02:24.813 SO libspdk_bdev_iscsi.so.6.0 00:02:24.813 SYMLINK libspdk_bdev_aio.so 00:02:24.813 LIB libspdk_bdev_lvol.a 00:02:24.813 SYMLINK libspdk_bdev_delay.so 00:02:24.813 SYMLINK libspdk_bdev_malloc.so 00:02:24.813 LIB libspdk_bdev_virtio.a 00:02:24.813 SO libspdk_bdev_lvol.so.6.0 00:02:24.813 SYMLINK libspdk_bdev_iscsi.so 00:02:24.813 SO libspdk_bdev_virtio.so.6.0 00:02:25.075 SYMLINK libspdk_bdev_lvol.so 00:02:25.075 SYMLINK libspdk_bdev_virtio.so 00:02:25.075 LIB libspdk_bdev_raid.a 00:02:25.337 SO libspdk_bdev_raid.so.6.0 00:02:25.337 SYMLINK libspdk_bdev_raid.so 00:02:26.279 LIB libspdk_bdev_nvme.a 00:02:26.279 SO libspdk_bdev_nvme.so.7.0 00:02:26.540 SYMLINK libspdk_bdev_nvme.so 00:02:27.111 CC module/event/subsystems/vmd/vmd.o 00:02:27.111 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:27.111 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:27.111 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:27.111 CC module/event/subsystems/scheduler/scheduler.o 00:02:27.111 CC module/event/subsystems/iobuf/iobuf.o 00:02:27.111 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:27.111 CC module/event/subsystems/keyring/keyring.o 00:02:27.111 CC module/event/subsystems/sock/sock.o 00:02:27.372 LIB libspdk_event_keyring.a 00:02:27.372 LIB libspdk_event_vmd.a 00:02:27.372 LIB libspdk_event_vfu_tgt.a 00:02:27.372 LIB libspdk_event_vhost_blk.a 00:02:27.372 LIB libspdk_event_iobuf.a 00:02:27.372 LIB libspdk_event_scheduler.a 00:02:27.372 LIB libspdk_event_sock.a 00:02:27.373 SO libspdk_event_vhost_blk.so.3.0 00:02:27.373 SO libspdk_event_keyring.so.1.0 00:02:27.373 SO libspdk_event_vmd.so.6.0 00:02:27.373 SO libspdk_event_vfu_tgt.so.3.0 00:02:27.373 SO libspdk_event_scheduler.so.4.0 00:02:27.373 SO libspdk_event_iobuf.so.3.0 00:02:27.373 SO libspdk_event_sock.so.5.0 00:02:27.373 SYMLINK libspdk_event_keyring.so 00:02:27.373 SYMLINK libspdk_event_vhost_blk.so 00:02:27.373 SYMLINK libspdk_event_vfu_tgt.so 00:02:27.373 SYMLINK libspdk_event_vmd.so 00:02:27.373 SYMLINK libspdk_event_scheduler.so 00:02:27.373 SYMLINK libspdk_event_iobuf.so 00:02:27.373 SYMLINK libspdk_event_sock.so 00:02:27.953 CC module/event/subsystems/accel/accel.o 00:02:27.953 LIB libspdk_event_accel.a 00:02:27.953 SO libspdk_event_accel.so.6.0 00:02:27.953 SYMLINK libspdk_event_accel.so 00:02:28.526 CC module/event/subsystems/bdev/bdev.o 00:02:28.526 LIB libspdk_event_bdev.a 00:02:28.526 SO libspdk_event_bdev.so.6.0 00:02:28.788 SYMLINK libspdk_event_bdev.so 00:02:29.048 CC module/event/subsystems/nbd/nbd.o 00:02:29.048 CC module/event/subsystems/scsi/scsi.o 00:02:29.048 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:29.048 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:29.048 CC module/event/subsystems/ublk/ublk.o 00:02:29.309 LIB libspdk_event_nbd.a 00:02:29.309 LIB libspdk_event_scsi.a 00:02:29.309 LIB libspdk_event_ublk.a 00:02:29.309 SO libspdk_event_ublk.so.3.0 00:02:29.309 SO 
libspdk_event_nbd.so.6.0 00:02:29.309 SO libspdk_event_scsi.so.6.0 00:02:29.309 LIB libspdk_event_nvmf.a 00:02:29.309 SYMLINK libspdk_event_ublk.so 00:02:29.309 SYMLINK libspdk_event_nbd.so 00:02:29.309 SYMLINK libspdk_event_scsi.so 00:02:29.309 SO libspdk_event_nvmf.so.6.0 00:02:29.309 SYMLINK libspdk_event_nvmf.so 00:02:29.571 CC module/event/subsystems/iscsi/iscsi.o 00:02:29.571 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:29.831 LIB libspdk_event_iscsi.a 00:02:29.831 LIB libspdk_event_vhost_scsi.a 00:02:29.831 SO libspdk_event_iscsi.so.6.0 00:02:29.831 SO libspdk_event_vhost_scsi.so.3.0 00:02:29.831 SYMLINK libspdk_event_iscsi.so 00:02:29.831 SYMLINK libspdk_event_vhost_scsi.so 00:02:30.092 SO libspdk.so.6.0 00:02:30.092 SYMLINK libspdk.so 00:02:30.665 TEST_HEADER include/spdk/accel.h 00:02:30.665 TEST_HEADER include/spdk/accel_module.h 00:02:30.665 TEST_HEADER include/spdk/barrier.h 00:02:30.665 TEST_HEADER include/spdk/assert.h 00:02:30.665 TEST_HEADER include/spdk/base64.h 00:02:30.665 CXX app/trace/trace.o 00:02:30.665 TEST_HEADER include/spdk/bdev_module.h 00:02:30.665 TEST_HEADER include/spdk/bdev.h 00:02:30.665 TEST_HEADER include/spdk/bdev_zone.h 00:02:30.665 CC app/spdk_nvme_perf/perf.o 00:02:30.665 TEST_HEADER include/spdk/bit_array.h 00:02:30.665 TEST_HEADER include/spdk/bit_pool.h 00:02:30.665 TEST_HEADER include/spdk/blob_bdev.h 00:02:30.665 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:30.665 TEST_HEADER include/spdk/blobfs.h 00:02:30.665 TEST_HEADER include/spdk/blob.h 00:02:30.665 TEST_HEADER include/spdk/conf.h 00:02:30.665 CC app/spdk_nvme_discover/discovery_aer.o 00:02:30.665 TEST_HEADER include/spdk/cpuset.h 00:02:30.665 TEST_HEADER include/spdk/config.h 00:02:30.665 CC app/spdk_lspci/spdk_lspci.o 00:02:30.665 TEST_HEADER include/spdk/crc16.h 00:02:30.665 TEST_HEADER include/spdk/crc32.h 00:02:30.665 CC app/spdk_nvme_identify/identify.o 00:02:30.665 CC test/rpc_client/rpc_client_test.o 00:02:30.665 TEST_HEADER include/spdk/crc64.h 00:02:30.665 TEST_HEADER include/spdk/dif.h 00:02:30.665 CC app/trace_record/trace_record.o 00:02:30.665 CC app/spdk_top/spdk_top.o 00:02:30.665 TEST_HEADER include/spdk/dma.h 00:02:30.665 TEST_HEADER include/spdk/endian.h 00:02:30.665 TEST_HEADER include/spdk/env_dpdk.h 00:02:30.665 TEST_HEADER include/spdk/env.h 00:02:30.665 TEST_HEADER include/spdk/event.h 00:02:30.665 TEST_HEADER include/spdk/fd.h 00:02:30.665 TEST_HEADER include/spdk/fd_group.h 00:02:30.665 TEST_HEADER include/spdk/file.h 00:02:30.665 TEST_HEADER include/spdk/ftl.h 00:02:30.665 TEST_HEADER include/spdk/gpt_spec.h 00:02:30.665 TEST_HEADER include/spdk/hexlify.h 00:02:30.665 TEST_HEADER include/spdk/histogram_data.h 00:02:30.665 TEST_HEADER include/spdk/idxd.h 00:02:30.665 TEST_HEADER include/spdk/idxd_spec.h 00:02:30.665 TEST_HEADER include/spdk/init.h 00:02:30.665 TEST_HEADER include/spdk/ioat_spec.h 00:02:30.665 TEST_HEADER include/spdk/ioat.h 00:02:30.665 TEST_HEADER include/spdk/iscsi_spec.h 00:02:30.665 TEST_HEADER include/spdk/json.h 00:02:30.665 TEST_HEADER include/spdk/jsonrpc.h 00:02:30.665 TEST_HEADER include/spdk/keyring.h 00:02:30.665 TEST_HEADER include/spdk/keyring_module.h 00:02:30.665 CC app/spdk_dd/spdk_dd.o 00:02:30.665 TEST_HEADER include/spdk/likely.h 00:02:30.665 TEST_HEADER include/spdk/log.h 00:02:30.665 TEST_HEADER include/spdk/lvol.h 00:02:30.665 TEST_HEADER include/spdk/mmio.h 00:02:30.665 TEST_HEADER include/spdk/memory.h 00:02:30.665 TEST_HEADER include/spdk/nbd.h 00:02:30.665 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:30.665 
TEST_HEADER include/spdk/notify.h 00:02:30.665 TEST_HEADER include/spdk/nvme_intel.h 00:02:30.665 TEST_HEADER include/spdk/nvme.h 00:02:30.665 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:30.665 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:30.665 CC app/iscsi_tgt/iscsi_tgt.o 00:02:30.665 TEST_HEADER include/spdk/nvme_spec.h 00:02:30.665 TEST_HEADER include/spdk/nvme_zns.h 00:02:30.665 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:30.665 TEST_HEADER include/spdk/nvmf.h 00:02:30.665 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:30.665 TEST_HEADER include/spdk/nvmf_transport.h 00:02:30.665 TEST_HEADER include/spdk/nvmf_spec.h 00:02:30.665 TEST_HEADER include/spdk/opal.h 00:02:30.665 TEST_HEADER include/spdk/pci_ids.h 00:02:30.665 TEST_HEADER include/spdk/opal_spec.h 00:02:30.665 TEST_HEADER include/spdk/pipe.h 00:02:30.665 TEST_HEADER include/spdk/queue.h 00:02:30.665 CC app/nvmf_tgt/nvmf_main.o 00:02:30.665 TEST_HEADER include/spdk/reduce.h 00:02:30.665 TEST_HEADER include/spdk/rpc.h 00:02:30.665 TEST_HEADER include/spdk/scheduler.h 00:02:30.665 TEST_HEADER include/spdk/scsi.h 00:02:30.665 CC app/spdk_tgt/spdk_tgt.o 00:02:30.665 TEST_HEADER include/spdk/scsi_spec.h 00:02:30.665 TEST_HEADER include/spdk/sock.h 00:02:30.665 TEST_HEADER include/spdk/stdinc.h 00:02:30.665 TEST_HEADER include/spdk/string.h 00:02:30.665 TEST_HEADER include/spdk/trace.h 00:02:30.665 TEST_HEADER include/spdk/thread.h 00:02:30.665 TEST_HEADER include/spdk/trace_parser.h 00:02:30.665 TEST_HEADER include/spdk/tree.h 00:02:30.665 TEST_HEADER include/spdk/ublk.h 00:02:30.665 TEST_HEADER include/spdk/util.h 00:02:30.665 TEST_HEADER include/spdk/version.h 00:02:30.665 TEST_HEADER include/spdk/uuid.h 00:02:30.665 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:30.665 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:30.665 TEST_HEADER include/spdk/vhost.h 00:02:30.665 TEST_HEADER include/spdk/vmd.h 00:02:30.665 TEST_HEADER include/spdk/xor.h 00:02:30.665 TEST_HEADER include/spdk/zipf.h 00:02:30.665 CXX test/cpp_headers/assert.o 00:02:30.665 CXX test/cpp_headers/accel.o 00:02:30.665 CXX test/cpp_headers/accel_module.o 00:02:30.665 CXX test/cpp_headers/base64.o 00:02:30.665 CXX test/cpp_headers/barrier.o 00:02:30.665 CXX test/cpp_headers/bdev.o 00:02:30.665 CXX test/cpp_headers/bdev_module.o 00:02:30.665 CXX test/cpp_headers/bdev_zone.o 00:02:30.665 CXX test/cpp_headers/bit_array.o 00:02:30.665 CXX test/cpp_headers/blob_bdev.o 00:02:30.665 CXX test/cpp_headers/bit_pool.o 00:02:30.665 CXX test/cpp_headers/blobfs_bdev.o 00:02:30.665 CXX test/cpp_headers/blobfs.o 00:02:30.665 CXX test/cpp_headers/blob.o 00:02:30.665 CXX test/cpp_headers/config.o 00:02:30.665 CXX test/cpp_headers/conf.o 00:02:30.665 CXX test/cpp_headers/cpuset.o 00:02:30.665 CXX test/cpp_headers/crc16.o 00:02:30.665 CXX test/cpp_headers/crc32.o 00:02:30.665 CXX test/cpp_headers/crc64.o 00:02:30.665 CXX test/cpp_headers/dif.o 00:02:30.665 CXX test/cpp_headers/env_dpdk.o 00:02:30.665 CXX test/cpp_headers/dma.o 00:02:30.665 CXX test/cpp_headers/endian.o 00:02:30.665 CXX test/cpp_headers/event.o 00:02:30.665 CXX test/cpp_headers/env.o 00:02:30.665 CXX test/cpp_headers/fd.o 00:02:30.665 CXX test/cpp_headers/fd_group.o 00:02:30.665 CXX test/cpp_headers/ftl.o 00:02:30.665 CXX test/cpp_headers/file.o 00:02:30.665 CXX test/cpp_headers/hexlify.o 00:02:30.665 CXX test/cpp_headers/gpt_spec.o 00:02:30.665 CXX test/cpp_headers/idxd.o 00:02:30.665 CXX test/cpp_headers/histogram_data.o 00:02:30.665 CXX test/cpp_headers/idxd_spec.o 00:02:30.665 CXX test/cpp_headers/init.o 
00:02:30.665 CXX test/cpp_headers/ioat_spec.o 00:02:30.665 CXX test/cpp_headers/ioat.o 00:02:30.665 CXX test/cpp_headers/iscsi_spec.o 00:02:30.665 CXX test/cpp_headers/keyring.o 00:02:30.665 CXX test/cpp_headers/json.o 00:02:30.665 CXX test/cpp_headers/jsonrpc.o 00:02:30.665 CXX test/cpp_headers/keyring_module.o 00:02:30.665 CXX test/cpp_headers/lvol.o 00:02:30.665 CXX test/cpp_headers/memory.o 00:02:30.665 CXX test/cpp_headers/likely.o 00:02:30.665 CXX test/cpp_headers/log.o 00:02:30.665 CXX test/cpp_headers/mmio.o 00:02:30.665 CXX test/cpp_headers/nbd.o 00:02:30.665 CXX test/cpp_headers/nvme.o 00:02:30.665 CXX test/cpp_headers/notify.o 00:02:30.665 CXX test/cpp_headers/nvme_intel.o 00:02:30.665 CXX test/cpp_headers/nvme_ocssd.o 00:02:30.665 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:30.665 CXX test/cpp_headers/nvme_spec.o 00:02:30.665 CXX test/cpp_headers/nvmf_cmd.o 00:02:30.665 CXX test/cpp_headers/nvme_zns.o 00:02:30.665 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:30.665 CXX test/cpp_headers/nvmf_spec.o 00:02:30.665 CXX test/cpp_headers/nvmf.o 00:02:30.665 CXX test/cpp_headers/nvmf_transport.o 00:02:30.665 CXX test/cpp_headers/opal.o 00:02:30.665 CXX test/cpp_headers/opal_spec.o 00:02:30.665 CXX test/cpp_headers/pipe.o 00:02:30.665 CXX test/cpp_headers/queue.o 00:02:30.665 CXX test/cpp_headers/reduce.o 00:02:30.665 CXX test/cpp_headers/pci_ids.o 00:02:30.665 CXX test/cpp_headers/scheduler.o 00:02:30.665 CXX test/cpp_headers/scsi.o 00:02:30.665 CXX test/cpp_headers/rpc.o 00:02:30.665 CXX test/cpp_headers/sock.o 00:02:30.665 CXX test/cpp_headers/scsi_spec.o 00:02:30.665 CXX test/cpp_headers/stdinc.o 00:02:30.665 CXX test/cpp_headers/thread.o 00:02:30.665 CXX test/cpp_headers/string.o 00:02:30.665 CXX test/cpp_headers/trace.o 00:02:30.665 CXX test/cpp_headers/tree.o 00:02:30.665 CXX test/cpp_headers/trace_parser.o 00:02:30.665 CXX test/cpp_headers/ublk.o 00:02:30.665 CXX test/cpp_headers/util.o 00:02:30.665 CXX test/cpp_headers/uuid.o 00:02:30.665 CXX test/cpp_headers/vfio_user_pci.o 00:02:30.665 CXX test/cpp_headers/version.o 00:02:30.665 CXX test/cpp_headers/vfio_user_spec.o 00:02:30.665 LINK spdk_lspci 00:02:30.665 CXX test/cpp_headers/vhost.o 00:02:30.665 CXX test/cpp_headers/xor.o 00:02:30.665 CXX test/cpp_headers/vmd.o 00:02:30.665 CXX test/cpp_headers/zipf.o 00:02:30.665 CC test/env/vtophys/vtophys.o 00:02:30.665 CC test/app/histogram_perf/histogram_perf.o 00:02:30.665 CC test/app/stub/stub.o 00:02:30.665 CC test/app/jsoncat/jsoncat.o 00:02:30.665 CC test/thread/poller_perf/poller_perf.o 00:02:30.665 CC examples/util/zipf/zipf.o 00:02:30.927 CC examples/ioat/verify/verify.o 00:02:30.927 CC test/env/pci/pci_ut.o 00:02:30.927 CC examples/ioat/perf/perf.o 00:02:30.927 CC test/env/memory/memory_ut.o 00:02:30.927 CC app/fio/nvme/fio_plugin.o 00:02:30.927 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:30.927 CC test/app/bdev_svc/bdev_svc.o 00:02:30.927 CC test/dma/test_dma/test_dma.o 00:02:30.927 LINK spdk_nvme_discover 00:02:30.927 CC app/fio/bdev/fio_plugin.o 00:02:30.927 LINK rpc_client_test 00:02:30.927 LINK spdk_trace_record 00:02:30.927 LINK interrupt_tgt 00:02:31.262 CC test/env/mem_callbacks/mem_callbacks.o 00:02:31.262 LINK nvmf_tgt 00:02:31.262 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:31.262 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:31.262 LINK iscsi_tgt 00:02:31.262 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:31.262 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:31.262 LINK spdk_tgt 00:02:31.262 LINK ioat_perf 00:02:31.262 LINK vtophys 00:02:31.262 
LINK jsoncat 00:02:31.262 LINK poller_perf 00:02:31.524 LINK spdk_dd 00:02:31.524 LINK histogram_perf 00:02:31.524 LINK stub 00:02:31.524 LINK zipf 00:02:31.524 LINK env_dpdk_post_init 00:02:31.524 LINK bdev_svc 00:02:31.524 LINK spdk_trace 00:02:31.524 LINK verify 00:02:31.524 LINK test_dma 00:02:31.524 LINK spdk_nvme_perf 00:02:31.524 LINK nvme_fuzz 00:02:31.524 LINK pci_ut 00:02:31.524 LINK vhost_fuzz 00:02:31.524 LINK spdk_top 00:02:31.784 LINK spdk_nvme 00:02:31.784 LINK spdk_bdev 00:02:31.784 LINK mem_callbacks 00:02:31.784 CC test/event/event_perf/event_perf.o 00:02:31.784 CC test/event/reactor_perf/reactor_perf.o 00:02:31.784 CC test/event/reactor/reactor.o 00:02:31.784 CC app/vhost/vhost.o 00:02:31.784 LINK spdk_nvme_identify 00:02:31.784 CC test/event/app_repeat/app_repeat.o 00:02:31.784 CC examples/vmd/led/led.o 00:02:31.784 CC test/event/scheduler/scheduler.o 00:02:31.784 CC examples/vmd/lsvmd/lsvmd.o 00:02:31.784 CC examples/idxd/perf/perf.o 00:02:31.784 CC examples/sock/hello_world/hello_sock.o 00:02:32.043 CC examples/thread/thread/thread_ex.o 00:02:32.043 LINK reactor_perf 00:02:32.043 LINK led 00:02:32.043 LINK event_perf 00:02:32.043 LINK reactor 00:02:32.043 CC test/nvme/aer/aer.o 00:02:32.043 CC test/nvme/connect_stress/connect_stress.o 00:02:32.043 CC test/nvme/fdp/fdp.o 00:02:32.043 CC test/nvme/startup/startup.o 00:02:32.043 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:32.043 CC test/nvme/e2edp/nvme_dp.o 00:02:32.043 CC test/nvme/sgl/sgl.o 00:02:32.043 CC test/nvme/err_injection/err_injection.o 00:02:32.043 CC test/nvme/reset/reset.o 00:02:32.043 CC test/nvme/boot_partition/boot_partition.o 00:02:32.043 CC test/nvme/reserve/reserve.o 00:02:32.043 CC test/nvme/simple_copy/simple_copy.o 00:02:32.043 CC test/nvme/overhead/overhead.o 00:02:32.043 CC test/nvme/compliance/nvme_compliance.o 00:02:32.043 LINK app_repeat 00:02:32.043 LINK lsvmd 00:02:32.043 CC test/nvme/fused_ordering/fused_ordering.o 00:02:32.043 CC test/nvme/cuse/cuse.o 00:02:32.043 CC test/blobfs/mkfs/mkfs.o 00:02:32.043 CC test/accel/dif/dif.o 00:02:32.043 LINK vhost 00:02:32.043 LINK scheduler 00:02:32.043 LINK hello_sock 00:02:32.043 CC test/lvol/esnap/esnap.o 00:02:32.304 LINK connect_stress 00:02:32.304 LINK idxd_perf 00:02:32.304 LINK memory_ut 00:02:32.304 LINK thread 00:02:32.304 LINK boot_partition 00:02:32.304 LINK doorbell_aers 00:02:32.304 LINK err_injection 00:02:32.304 LINK startup 00:02:32.304 LINK reserve 00:02:32.304 LINK aer 00:02:32.304 LINK nvme_dp 00:02:32.304 LINK fused_ordering 00:02:32.304 LINK simple_copy 00:02:32.304 LINK overhead 00:02:32.304 LINK mkfs 00:02:32.304 LINK reset 00:02:32.304 LINK sgl 00:02:32.304 LINK fdp 00:02:32.304 LINK nvme_compliance 00:02:32.564 LINK dif 00:02:32.564 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:32.564 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:32.564 LINK iscsi_fuzz 00:02:32.564 CC examples/nvme/hotplug/hotplug.o 00:02:32.564 CC examples/nvme/reconnect/reconnect.o 00:02:32.564 CC examples/nvme/hello_world/hello_world.o 00:02:32.564 CC examples/nvme/arbitration/arbitration.o 00:02:32.564 CC examples/nvme/abort/abort.o 00:02:32.564 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:32.824 CC examples/accel/perf/accel_perf.o 00:02:32.824 CC examples/blob/cli/blobcli.o 00:02:32.824 CC examples/blob/hello_world/hello_blob.o 00:02:32.824 LINK pmr_persistence 00:02:32.824 LINK cmb_copy 00:02:32.824 LINK hello_world 00:02:32.824 LINK hotplug 00:02:32.824 LINK arbitration 00:02:32.824 LINK reconnect 00:02:32.824 LINK abort 00:02:33.083 
LINK nvme_manage 00:02:33.083 LINK hello_blob 00:02:33.083 CC test/bdev/bdevio/bdevio.o 00:02:33.083 LINK accel_perf 00:02:33.083 LINK cuse 00:02:33.343 LINK blobcli 00:02:33.343 LINK bdevio 00:02:33.605 CC examples/bdev/hello_world/hello_bdev.o 00:02:33.605 CC examples/bdev/bdevperf/bdevperf.o 00:02:33.866 LINK hello_bdev 00:02:34.437 LINK bdevperf 00:02:35.006 CC examples/nvmf/nvmf/nvmf.o 00:02:35.265 LINK nvmf 00:02:36.206 LINK esnap 00:02:36.778 00:02:36.778 real 0m50.905s 00:02:36.778 user 6m32.868s 00:02:36.778 sys 4m9.694s 00:02:36.778 20:16:28 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:36.778 20:16:28 make -- common/autotest_common.sh@10 -- $ set +x 00:02:36.778 ************************************ 00:02:36.778 END TEST make 00:02:36.778 ************************************ 00:02:36.778 20:16:28 -- common/autotest_common.sh@1142 -- $ return 0 00:02:36.778 20:16:28 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:36.778 20:16:28 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:36.778 20:16:28 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:36.778 20:16:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.778 20:16:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:36.778 20:16:28 -- pm/common@44 -- $ pid=979302 00:02:36.778 20:16:28 -- pm/common@50 -- $ kill -TERM 979302 00:02:36.778 20:16:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.778 20:16:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:36.778 20:16:28 -- pm/common@44 -- $ pid=979303 00:02:36.778 20:16:28 -- pm/common@50 -- $ kill -TERM 979303 00:02:36.778 20:16:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.778 20:16:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:36.778 20:16:28 -- pm/common@44 -- $ pid=979305 00:02:36.778 20:16:28 -- pm/common@50 -- $ kill -TERM 979305 00:02:36.778 20:16:28 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:36.778 20:16:28 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:36.778 20:16:28 -- pm/common@44 -- $ pid=979328 00:02:36.778 20:16:28 -- pm/common@50 -- $ sudo -E kill -TERM 979328 00:02:36.778 20:16:29 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:36.778 20:16:29 -- nvmf/common.sh@7 -- # uname -s 00:02:36.778 20:16:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:36.778 20:16:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:36.778 20:16:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:36.778 20:16:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:36.778 20:16:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:36.778 20:16:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:36.778 20:16:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:36.779 20:16:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:36.779 20:16:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:36.779 20:16:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:36.779 20:16:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:36.779 20:16:29 -- nvmf/common.sh@18 -- # 
NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:02:36.779 20:16:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:36.779 20:16:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:36.779 20:16:29 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:36.779 20:16:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:36.779 20:16:29 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:36.779 20:16:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:36.779 20:16:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:36.779 20:16:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:36.779 20:16:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.779 20:16:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.779 20:16:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.779 20:16:29 -- paths/export.sh@5 -- # export PATH 00:02:36.779 20:16:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:36.779 20:16:29 -- nvmf/common.sh@47 -- # : 0 00:02:36.779 20:16:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:36.779 20:16:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:36.779 20:16:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:36.779 20:16:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:36.779 20:16:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:36.779 20:16:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:36.779 20:16:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:36.779 20:16:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:36.779 20:16:29 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:36.779 20:16:29 -- spdk/autotest.sh@32 -- # uname -s 00:02:36.779 20:16:29 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:36.779 20:16:29 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:36.779 20:16:29 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.779 20:16:29 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:36.779 20:16:29 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:36.779 20:16:29 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:36.779 20:16:29 -- spdk/autotest.sh@46 -- # type -P 
00:02:36.779 20:16:29 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm
00:02:36.779 20:16:29 -- spdk/autotest.sh@48 -- # udevadm_pid=1042629
00:02:36.779 20:16:29 -- spdk/autotest.sh@53 -- # start_monitor_resources
00:02:36.779 20:16:29 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property
00:02:36.779 20:16:29 -- pm/common@17 -- # local monitor
00:02:36.779 20:16:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:36.779 20:16:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:36.779 20:16:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:36.779 20:16:29 -- pm/common@21 -- # date +%s
00:02:36.779 20:16:29 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}"
00:02:36.779 20:16:29 -- pm/common@25 -- # sleep 1
00:02:36.779 20:16:29 -- pm/common@21 -- # date +%s
00:02:36.779 20:16:29 -- pm/common@21 -- # date +%s
00:02:36.779 20:16:29 -- pm/common@21 -- # date +%s
00:02:36.779 20:16:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067389
00:02:36.779 20:16:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067389
00:02:36.779 20:16:29 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067389
00:02:36.779 20:16:29 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721067389
00:02:37.038 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067389_collect-vmstat.pm.log
00:02:37.038 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067389_collect-cpu-load.pm.log
00:02:37.038 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067389_collect-cpu-temp.pm.log
00:02:37.038 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721067389_collect-bmc-pm.bmc.pm.log
00:02:37.978 20:16:30 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT
00:02:37.978 20:16:30 -- spdk/autotest.sh@57 -- # timing_enter autotest
00:02:37.978 20:16:30 -- common/autotest_common.sh@722 -- # xtrace_disable
00:02:37.978 20:16:30 -- common/autotest_common.sh@10 -- # set +x
00:02:37.978 20:16:30 -- spdk/autotest.sh@59 -- # create_test_list
00:02:37.978 20:16:30 -- common/autotest_common.sh@746 -- # xtrace_disable
00:02:37.978 20:16:30 -- common/autotest_common.sh@10 -- # set +x
00:02:37.978 20:16:30 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh
00:02:37.978 20:16:30 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:37.978 20:16:30 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:37.978 20:16:30 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:02:37.978 20:16:30 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
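autotest.sh arms the trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT seen above before it enters the test body, so cleanup runs whether the job finishes, fails or is killed. A generic sketch of that idiom, with an illustrative cleanup body:

    cleanup() {
        # '|| :' keeps the handler from aborting if nothing is left to reap
        pkill -f spdk_tgt || :
    }
    trap 'cleanup || :; exit 1' SIGINT SIGTERM EXIT
    # ... test body ...
    trap - EXIT    # disarm on a normal, successful exit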
00:02:37.978 20:16:30 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod
00:02:37.978 20:16:30 -- common/autotest_common.sh@1455 -- # uname
00:02:37.978 20:16:30 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']'
00:02:37.978 20:16:30 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf
00:02:37.978 20:16:30 -- common/autotest_common.sh@1475 -- # uname
00:02:37.978 20:16:30 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]]
00:02:37.978 20:16:30 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk
00:02:37.978 20:16:30 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc
00:02:37.978 20:16:30 -- spdk/autotest.sh@72 -- # hash lcov
00:02:37.979 20:16:30 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]]
00:02:37.979 20:16:30 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS=
00:02:37.979 --rc lcov_branch_coverage=1
00:02:37.979 --rc lcov_function_coverage=1
00:02:37.979 --rc genhtml_branch_coverage=1
00:02:37.979 --rc genhtml_function_coverage=1
00:02:37.979 --rc genhtml_legend=1
00:02:37.979 --rc geninfo_all_blocks=1
00:02:37.979 '
00:02:37.979 20:16:30 -- spdk/autotest.sh@80 -- # LCOV_OPTS='
00:02:37.979 --rc lcov_branch_coverage=1
00:02:37.979 --rc lcov_function_coverage=1
00:02:37.979 --rc genhtml_branch_coverage=1
00:02:37.979 --rc genhtml_function_coverage=1
00:02:37.979 --rc genhtml_legend=1
00:02:37.979 --rc geninfo_all_blocks=1
00:02:37.979 '
00:02:37.979 20:16:30 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov
00:02:37.979 --rc lcov_branch_coverage=1
00:02:37.979 --rc lcov_function_coverage=1
00:02:37.979 --rc genhtml_branch_coverage=1
00:02:37.979 --rc genhtml_function_coverage=1
00:02:37.979 --rc genhtml_legend=1
00:02:37.979 --rc geninfo_all_blocks=1
00:02:37.979 --no-external'
00:02:37.979 20:16:30 -- spdk/autotest.sh@81 -- # LCOV='lcov
00:02:37.979 --rc lcov_branch_coverage=1
00:02:37.979 --rc lcov_function_coverage=1
00:02:37.979 --rc genhtml_branch_coverage=1
00:02:37.979 --rc genhtml_function_coverage=1
00:02:37.979 --rc genhtml_legend=1
00:02:37.979 --rc geninfo_all_blocks=1
00:02:37.979 --no-external'
00:02:37.979 20:16:30 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v
00:02:37.979 lcov: LCOV version 1.14
00:02:37.979 20:16:30 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info
00:02:50.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found
00:02:50.206 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno
00:03:02.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found
00:03:02.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno
00:03:02.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found
00:03:02.439 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno
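The -q -c -i -t Baseline capture above is the standard lcov priming step: an initial, zero-count snapshot taken before any test runs, so that source files never executed still appear in the final report. A minimal sketch of the full flow, with placeholder build directory and output names:

    LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external"
    $LCOV -q -c -i -d ./build -t Baseline -o cov_base.info     # pre-run baseline, all counters zero
    # ... run the test suite ...
    $LCOV -q -c    -d ./build -t Tests    -o cov_test.info     # post-run capture
    $LCOV -a cov_base.info -a cov_test.info -o cov_total.info  # merge the two tracefiles
    genhtml cov_total.info -o coverage_html                    # render the HTML report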
[00:03:02.439-00:03:02.963 geninfo then emits the same two-line warning ("<header>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <header>.gcno") for each of the remaining generated objects under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/:
barrier, base64, bdev, accel_module, blob_bdev, bdev_module, bdev_zone, blobfs, bit_array, bit_pool, blob, blobfs_bdev, crc16, conf, dif, config, env_dpdk, crc64, cpuset, dma, env, crc32, ftl, endian, fd, file, hexlify, event, fd_group, idxd, gpt_spec, histogram_data, idxd_spec, init, ioat_spec, keyring, keyring_module, ioat, json, jsonrpc, log, iscsi_spec, memory, likely, nbd, notify, nvme_intel, nvmf_cmd, mmio, lvol, nvme_ocssd, nvme_ocssd_spec, nvme, nvmf_fc_spec, nvme_spec, nvmf_transport, scsi, queue, nvme_zns, nvmf, opal, pipe, nvmf_spec, opal_spec, scheduler, reduce, scsi_spec, stdinc, pci_ids, rpc, trace_parser, thread, trace, string, sock, util, tree, ublk, version, zipf, uuid, vfio_user_spec, vfio_user_pci, vmd, vhost and xor.]
00:03:07.169 20:16:59 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup
00:03:07.169 20:16:59 -- common/autotest_common.sh@722 -- # xtrace_disable
00:03:07.169 20:16:59 -- common/autotest_common.sh@10 -- # set +x
00:03:07.169 20:16:59 -- spdk/autotest.sh@91 -- # rm -f
00:03:07.169 20:16:59 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:11.371 0000:80:01.6 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.7 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.4 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.5 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.2 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.3 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.0 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:80:01.1 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:65:00.0 (144d a80a): Already using the nvme driver
00:03:11.371 0000:00:01.6 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.7 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.4 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.5 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.2 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.3 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.0 (8086 0b00): Already using the ioatdma driver
00:03:11.371 0000:00:01.1 (8086 0b00): Already using the ioatdma driver
00:03:11.371 20:17:03 -- spdk/autotest.sh@96 -- # get_zoned_devs
00:03:11.371 20:17:03 -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:11.371 20:17:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:11.371 20:17:03 -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:11.371 20:17:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:11.371 20:17:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:11.371 20:17:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:11.371 20:17:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:11.371 20:17:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:11.371 20:17:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 ))
00:03:11.371 20:17:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*)
00:03:11.371 20:17:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]]
00:03:11.371 20:17:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1
00:03:11.371 20:17:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt
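The get_zoned_devs walk a few entries up reads /sys/block/nvme*/queue/zoned and keeps any device that does not report "none", so zoned namespaces can be excluded from the raw writes that follow. A standalone sketch of the same scan:

    declare -A zoned_devs
    for nvme in /sys/block/nvme*; do
        [[ -e $nvme/queue/zoned ]] || continue        # attribute absent on old kernels
        [[ $(<"$nvme/queue/zoned") == none ]] && continue
        zoned_devs["${nvme##*/}"]=1                   # e.g. nvme0n1 -> zoned
    done
    echo "zoned devices: ${!zoned_devs[*]}"           # empty here, as in this log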
00:03:11.371 20:17:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1
00:03:11.372 No valid GPT data, bailing
00:03:11.372 20:17:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1
00:03:11.372 20:17:03 -- scripts/common.sh@391 -- # pt=
00:03:11.372 20:17:03 -- scripts/common.sh@392 -- # return 1
00:03:11.372 20:17:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1
00:03:11.372 1+0 records in
00:03:11.372 1+0 records out
00:03:11.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00333396 s, 315 MB/s
00:03:11.372 20:17:03 -- spdk/autotest.sh@118 -- # sync
00:03:11.372 20:17:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes
00:03:11.372 20:17:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null'
00:03:11.372 20:17:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes
00:03:19.506 20:17:11 -- spdk/autotest.sh@124 -- # uname -s
00:03:19.506 20:17:11 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']'
00:03:19.506 20:17:11 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:19.506 20:17:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.506 20:17:11 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.506 20:17:11 -- common/autotest_common.sh@10 -- # set +x
00:03:19.507 ************************************
00:03:19.507 START TEST setup.sh
00:03:19.507 ************************************
00:03:19.507 20:17:11 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh
00:03:19.507 * Looking for test storage...
00:03:19.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:19.507 20:17:11 setup.sh -- setup/test-setup.sh@10 -- # uname -s
00:03:19.507 20:17:11 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]]
00:03:19.507 20:17:11 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:19.507 20:17:11 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:19.507 20:17:11 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:19.507 20:17:11 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:19.507 ************************************
00:03:19.507 START TEST acl
00:03:19.507 ************************************
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh
00:03:19.507 * Looking for test storage...
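block_in_use above probes /dev/nvme0n1 with spdk-gpt.py and blkid before autotest.sh zeroes the first megabyte; the dd only runs because no partition table was found. A sketch of that guard in plain shell, run as root, with an illustrative device:

    dev=/dev/nvme0n1
    pt=$(blkid -s PTTYPE -o value "$dev" || true)    # empty when no partition table exists
    if [[ -z $pt ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1      # clear any stale metadata
    else
        echo "$dev carries a $pt partition table, refusing to wipe" >&2
    fi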
00:03:19.507 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=()
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme*
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]]
00:03:19.507 20:17:11 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]]
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@12 -- # devs=()
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@13 -- # drivers=()
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers
00:03:19.507 20:17:11 setup.sh.acl -- setup/acl.sh@51 -- # setup reset
00:03:19.507 20:17:11 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:19.507 20:17:11 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:23.712 20:17:15 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs
00:03:23.712 20:17:15 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver
00:03:23.712 20:17:15 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
00:03:23.712 20:17:15 setup.sh.acl -- setup/acl.sh@15 -- # setup output status
00:03:23.712 20:17:15 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]]
00:03:23.712 20:17:15 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:03:27.920 Hugepages
00:03:27.920 node hugesize free / total
[00:03:27.920 acl.sh skips each hugepage row of the status table (the 1048576kB and 2048kB sizes) with the same trace triple: setup/acl.sh@19 "[[ <hugesize> == *:*:*.* ]]", @19 "continue", @18 "read -r _ dev _ _ _ driver _".]
00:03:27.920
00:03:27.920 Type BDF Vendor Device NUMA Driver Device Block devices
[00:03:27.920 for each of the eight 0000:00:01.x rows, all bound to ioatdma, the loop logs setup/acl.sh@19 "[[ <bdf> == *:*:*.* ]]", @20 "[[ ioatdma == nvme ]]", @20 "continue", @18 "read -r _ dev _ _ _ driver _"; only the NVMe controller is collected:]
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:65:00.0 == *:*:*.* ]]
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]]
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]]
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev")
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _
[00:03:27.920 the same check/continue/read trace then repeats for the eight 0000:80:01.x ioatdma rows.]
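collect_setup_devs feeds the setup.sh status table through "read -r _ dev _ _ _ driver _", discarding every column except BDF and Driver, then keeps only rows whose driver is nvme; hugepage rows and ioatdma channels fall through the pattern checks, as traced above. A compact sketch of the same parse, with an assumed script path:

    devs=()
    declare -A drivers
    while read -r _ dev _ _ _ driver _; do
        [[ $dev == *:*:*.* ]] || continue     # skip hugepage rows and the table header
        [[ $driver == nvme ]] || continue     # skip ioatdma and friends
        devs+=("$dev")
        drivers[$dev]=$driver
    done < <(./scripts/setup.sh status)
    printf 'collected: %s\n' "${devs[@]}"     # here: 0000:65:00.0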
00:03:27.920 20:17:19 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 ))
00:03:27.921 20:17:19 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied
00:03:27.921 20:17:19 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:27.921 20:17:19 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:27.921 20:17:19 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:27.921 ************************************
00:03:27.921 START TEST denied
00:03:27.921 ************************************
00:03:27.921 20:17:19 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied
00:03:27.921 20:17:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:65:00.0'
00:03:27.921 20:17:19 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config
00:03:27.921 20:17:19 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:65:00.0'
00:03:27.921 20:17:19 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]]
00:03:27.921 20:17:19 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:31.282 0000:65:00.0 (144d a80a): Skipping denied controller at 0000:65:00.0
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:65:00.0
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@"
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:65:00.0 ]]
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:65:00.0/driver
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]]
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:31.282 20:17:23 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:36.572
00:03:36.572 real 0m8.843s
00:03:36.572 user 0m2.914s
00:03:36.572 sys 0m5.257s
00:03:36.572 20:17:28 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:36.572 20:17:28 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x
00:03:36.572 ************************************
00:03:36.572 END TEST denied
00:03:36.572 ************************************
00:03:36.572 20:17:28 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:36.572 20:17:28 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed
00:03:36.572 20:17:28 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:36.572 20:17:28 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:36.572 20:17:28 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:36.572 ************************************
00:03:36.572 START TEST allowed
00:03:36.572 ************************************
00:03:36.572 20:17:28 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed
00:03:36.572 20:17:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:65:00.0
00:03:36.572 20:17:28 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config
00:03:36.572 20:17:28 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:65:00.0 .*: nvme -> .*'
00:03:36.572 20:17:28 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]]
00:03:36.572 20:17:28 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
00:03:43.172 0000:65:00.0 (144d a80a): nvme -> vfio-pci
00:03:43.172 20:17:34 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify
00:03:43.172 20:17:34 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver
00:03:43.172 20:17:34 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset
00:03:43.172 20:17:34 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]]
00:03:43.172 20:17:34 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:03:46.472
00:03:46.472 real 0m9.805s
00:03:46.472 user 0m2.882s
00:03:46.472 sys 0m5.225s
00:03:46.472 20:17:38 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:46.472 20:17:38 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x
00:03:46.472 ************************************
00:03:46.472 END TEST allowed
00:03:46.472 ************************************
00:03:46.472 20:17:38 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0
00:03:46.472
00:03:46.472 real 0m26.828s
00:03:46.472 user 0m8.830s
00:03:46.472 sys 0m15.802s
00:03:46.472 20:17:38 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:46.472 20:17:38 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x
00:03:46.472 ************************************
00:03:46.472 END TEST acl
00:03:46.472 ************************************
00:03:46.472 20:17:38 setup.sh -- common/autotest_common.sh@1142 -- # return 0
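Both ACL tests steer scripts/setup.sh purely through environment variables and grep its output for the expected decision: PCI_BLOCKED must leave the controller alone, PCI_ALLOWED must let it be rebound to a userspace driver. Reduced to its essentials, with the BDF taken from this log:

    # denied: the blocked controller must be skipped
    PCI_BLOCKED=' 0000:65:00.0' ./scripts/setup.sh config \
        | grep 'Skipping denied controller at 0000:65:00.0'
    ./scripts/setup.sh reset
    # allowed: the same controller must be rebound, e.g. nvme -> vfio-pci
    PCI_ALLOWED='0000:65:00.0' ./scripts/setup.sh config \
        | grep -E '0000:65:00.0 .*: nvme -> .*'
    ./scripts/setup.sh reset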
00:03:46.472 20:17:38 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:46.472 20:17:38 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:46.472 20:17:38 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:46.472 20:17:38 setup.sh -- common/autotest_common.sh@10 -- # set +x
00:03:46.472 ************************************
00:03:46.472 START TEST hugepages
00:03:46.472 ************************************
00:03:46.472 20:17:38 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh
00:03:46.472 * Looking for test storage...
00:03:46.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=()
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@18 -- # local node=
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@19 -- # local var val
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': '
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _
00:03:46.472 20:17:38 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 106465064 kB' 'MemAvailable: 110196644 kB' 'Buffers: 4132 kB' 'Cached: 10635304 kB' 'SwapCached: 0 kB' 'Active: 7584876 kB' 'Inactive: 3701232 kB' 'Active(anon): 7093444 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 650476 kB' 'Mapped: 183608 kB' 'Shmem: 6446772 kB' 'KReclaimable: 579860 kB' 'Slab: 1457768 kB' 'SReclaimable: 579860 kB' 'SUnreclaim: 877908 kB' 'KernelStack: 27792 kB' 'PageTables: 9004 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69460876 kB' 'Committed_AS: 8706136 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237584 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
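A quick consistency check on the dump above: total hugetlb memory is HugePages_Total times Hugepagesize, i.e. 2048 pages x 2048 kB = 4194304 kB, which matches the Hugetlb figure printed. The same check against a live system:

    awk '/^HugePages_Total/ {n = $2}
         /^Hugepagesize/    {sz = $2}
         END {printf "%d pages x %d kB = %d kB\n", n, sz, n * sz}' /proc/meminfo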
[00:03:46.472-00:03:46.473 get_meminfo then walks the captured entries one key at a time; for every key ahead of Hugepagesize (MemTotal, MemFree, MemAvailable, Buffers, Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu and HardwareCorrupted) it logs the same four xtrace lines: setup/common.sh@32 "[[ <key> == \H\u\g\e\p\a\g\e\s\i\z\e ]]", @32 "continue", @31 "IFS=': '", @31 "read -r var val _".]
00:03:46.473 20:17:38
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:46.473 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
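The Hugepagesize lookup above is setup/common.sh's get_meminfo helper at work: the big printf dumps the meminfo snapshot, and the @31-32 records are its read loop scanning for the requested key. As a reading aid, here is a minimal bash sketch of that helper reconstructed from the xtrace; the variable names, paths, and the printf-fed read loop follow the trace, but the body is a paraphrase, not the shipped script.

    #!/usr/bin/env bash
    shopt -s extglob    # needed for the +([0-9]) pattern stripped at @29

    # Paraphrase of setup/common.sh get_meminfo, as reconstructed from the trace.
    # usage: get_meminfo <field> [numa-node]
    get_meminfo() {
        local get=$1
        local node=${2:-}
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # With a node argument, prefer the per-node meminfo file (trace @23)
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node <n> "; strip it (trace @29)
        mem=("${mem[@]#Node +([0-9]) }")
        # The '@16 printf' records in the log are this feed into the read loop
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long '# continue' runs above
            echo "$val"                        # e.g. 2048 for Hugepagesize
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }

With the meminfo snapshot above, get_meminfo Hugepagesize prints 2048, which is exactly the '# echo 2048' / '# return 0' pair recorded in the log.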
00:03:46.473 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp
00:03:46.474 20:17:38 setup.sh.hugepages -- [xtrace condensed: setup/hugepages.sh@39-41 iterates node0 and node1 and, for each "/sys/devices/system/node/node$node/hugepages/hugepages-"* entry, records '# echo 0' (four writes in total)]
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes
00:03:46.474 20:17:38 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup
00:03:46.474 20:17:38 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:46.474 20:17:38 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:46.474 20:17:38 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:46.474 ************************************
00:03:46.474 START TEST default_setup
00:03:46.474 ************************************
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0')
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0
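The jump from size=2097152 to nr_hugepages=1024 above is plain division by the default hugepage size. A sketch of the arithmetic; the kB units are an assumption (they keep 2097152 = 2 GiB and 2048 = 2 MiB pages consistent), but the division itself matches the traced values exactly.

    # Assumed reconstruction of the sizing traced at setup/hugepages.sh@49-@73.
    size=2097152              # requested hugepage pool, kB (2 GiB)
    default_hugepages=2048    # default hugepage size, kB, from get_meminfo Hugepagesize
    if (( size >= default_hugepages )); then           # the @55 guard
        nr_hugepages=$(( size / default_hugepages ))   # 2097152 / 2048 = 1024
    fi
    # user_nodes=('0'): the whole count is pinned to node 0 (@62-@71),
    # node 1 gets nothing
    nodes_test=()
    nodes_test[0]=$nr_hugepages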
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]]
00:03:46.474 20:17:38 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:50.684 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci
00:03:50.684 0000:65:00.0 (144d a80a): nvme -> vfio-pci
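The 'ioatdma -> vfio-pci' and 'nvme -> vfio-pci' lines are scripts/setup.sh detaching each PCI function from its kernel driver so SPDK can drive it from userspace. Below is an illustrative sketch of the standard sysfs rebind mechanism those lines imply; it is not the SPDK script itself, and the BDF is just one example taken from the list above.

    # Illustrative rebind of one PCI function to vfio-pci (run as root).
    bdf=0000:80:01.6
    dev=/sys/bus/pci/devices/$bdf
    if [[ -e $dev/driver ]]; then
        echo "$bdf" > "$dev/driver/unbind"    # detach the current driver (ioatdma)
    fi
    echo vfio-pci > "$dev/driver_override"    # pin which driver may bind next
    echo "$bdf" > /sys/bus/pci/drivers_probe  # ask the kernel to (re)probe it
    echo > "$dev/driver_override"             # clear the override afterwards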
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108647468 kB' 'MemAvailable: 112379016 kB' 'Buffers: 4132 kB' 'Cached: 10635440 kB' 'SwapCached: 0 kB' 'Active: 7597524 kB' 'Inactive: 3701232 kB' 'Active(anon): 7106092 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662244 kB' 'Mapped: 183364 kB' 'Shmem: 6446908 kB' 'KReclaimable: 579828 kB' 'Slab: 1455860 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 876032 kB' 'KernelStack: 27808 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8717560 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237804 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:03:50.684 20:17:42 setup.sh.hugepages.default_setup -- [xtrace condensed: setup/common.sh@31-32 walks every key from MemTotal through HardwareCorrupted against AnonHugePages; each non-match hits '# continue']
00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0
00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
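verify_nr_hugepages gathers its counters with the same helper. A short paraphrase of the @96-@100 bookkeeping traced above, assuming the get_meminfo sketch from earlier; the variable names come from the trace, the control flow is reconstructed, not quoted.

    # Paraphrase of the traced verify_nr_hugepages@96-@100 steps.
    anon=0
    # @96: only count AnonHugePages when THP is not pinned to [never];
    # this host reports "always [madvise] never", so the branch is taken
    thp=$(cat /sys/kernel/mm/transparent_hugepage/enabled)
    if [[ $thp != *'[never]'* ]]; then
        anon=$(get_meminfo AnonHugePages)   # -> 0 in this run
    fi
    surp=$(get_meminfo HugePages_Surp)      # -> 0, traced next
    resv=$(get_meminfo HugePages_Rsvd)      # the pass the log enters after that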
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:50.685 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108647284 kB' 'MemAvailable: 112378832 kB' 'Buffers: 4132 kB' 'Cached: 10635444 kB' 'SwapCached: 0 kB' 'Active: 7597240 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105808 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661880 kB' 'Mapped: 183272 kB' 'Shmem: 6446912 kB' 'KReclaimable: 579828 kB' 'Slab: 1455852 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 876024 kB' 'KernelStack: 27728 kB' 'PageTables: 8928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8717580 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.686 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue
[xtrace condensed: setup/common.sh@31-@32 repeats the IFS=': ' / read -r var val _ / compare / continue cycle for each remaining /proc/meminfo field, SecPageTables through HugePages_Rsvd; none matches HugePages_Surp]
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
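For readers following the trace: the lines above and below exercise common.sh's get_meminfo helper, which snapshots the relevant meminfo file into an array and scans it field by field until the requested key matches, then echoes that key's value. A minimal sketch of the pattern, reconstructed from the traced commands rather than taken from the shipped source (per-node prefix stripping is shown further down):

    # Reconstruction of the get_meminfo pattern visible in the xtrace above.
    get_meminfo() {                # usage: get_meminfo HugePages_Rsvd [numa-node]
        local get=$1 node=${2:-}
        local var val _
        local mem_f mem
        mem_f=/proc/meminfo
        # A node argument switches to the node-local counters (common.sh@23-@24).
        [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"  # "HugePages_Rsvd: 0" -> var, val
            [[ $var == "$get" ]] || continue        # each skipped field is one "continue" in the log
            echo "$val"                             # e.g. 0 for HugePages_Rsvd
            return 0
        done
        return 1
    }

Every compare/continue pair in the trace is one non-matching field, which is why a single lookup produces dozens of log lines.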
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.687 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108647284 kB' 'MemAvailable: 112378832 kB' 'Buffers: 4132 kB' 'Cached: 10635460 kB' 'SwapCached: 0 kB' 'Active: 7597112 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105680 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662220 kB' 'Mapped: 183196 kB' 'Shmem: 6446928 kB' 'KReclaimable: 579828 kB' 'Slab: 1455844 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 876016 kB' 'KernelStack: 27792 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8717600 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: the same per-field cycle walks every field from MemTotal through HugePages_Free; none matches HugePages_Rsvd]
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:50.689 nr_hugepages=1024
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:50.689 resv_hugepages=0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:50.689 surplus_hugepages=0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:50.689 anon_hugepages=0
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
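The two arithmetic checks just above are the test's consistency assertions: the pool it configured (nr_hugepages=1024) must equal what the kernel now reports once surplus and reserved pages are added back, which here works out to 1024 == 1024 + 0 + 0. The same invariant can be checked standalone; a sketch (the awk extraction stands in for the script's get_meminfo calls, and the variable names simply mirror the trace):

    #!/usr/bin/env bash
    # Check hugepage pool accounting, mirroring the hugepages.sh@107/@109 assertions.
    nr_hugepages=1024    # pool size the test configured earlier
    total=$(awk '/^HugePages_Total:/ {print $2}' /proc/meminfo)
    surp=$(awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo)
    resv=$(awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo)
    # In the run traced here: 1024 == 1024 + 0 + 0
    (( total == nr_hugepages + surp + resv )) || { echo "hugepage accounting mismatch" >&2; exit 1; }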
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.689 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108648644 kB' 'MemAvailable: 112380192 kB' 'Buffers: 4132 kB' 'Cached: 10635484 kB' 'SwapCached: 0 kB' 'Active: 7597144 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105712 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662224 kB' 'Mapped: 183196 kB' 'Shmem: 6446952 kB' 'KReclaimable: 579828 kB' 'Slab: 1455844 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 876016 kB' 'KernelStack: 27792 kB' 'PageTables: 9120 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8717624 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237788 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: the same per-field cycle walks every field from MemTotal through Unaccepted; none matches HugePages_Total]
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.691 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59438652 kB' 'MemUsed: 6220356 kB' 'SwapCached: 0 kB' 'Active: 1529376 kB' 'Inactive: 288480 kB' 'Active(anon): 1371628 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666760 kB' 'Mapped: 58816 kB' 'AnonPages: 154368 kB' 'Shmem: 1220532 kB' 'KernelStack: 12584 kB' 'PageTables: 3528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 745284 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420400 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the node0 fields MemTotal through KReclaimable are compared against HugePages_Surp and skipped; the cycle continues]
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
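The trace above is get_meminfo walking node0's meminfo dump key by key until it reaches the requested field (HugePages_Surp here); every non-matching key just hits continue and the next read. A minimal self-contained sketch of that scan, under the assumption it behaves as the traced common.sh lines suggest; the helper name get_meminfo_value is illustrative, not the script's own:

    get_meminfo_value() {
        # Scan a meminfo-style file with IFS=': ' until the requested key
        # matches, then print its value column, mirroring the loop traced above.
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < "$file"
        return 1
    }

For the dump shown above, get_meminfo_value HugePages_Surp would print 0, which is exactly the value the trace echoes before returning.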
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:03:50.692 node0=1024 expecting 1024
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:03:50.692 
00:03:50.692 real 0m4.166s
00:03:50.692 user 0m1.644s
00:03:50.692 sys 0m2.521s
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:50.692 20:17:42 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:03:50.692 ************************************
00:03:50.692 END TEST default_setup
00:03:50.693 ************************************
00:03:50.693 20:17:42 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:50.693 20:17:42 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:03:50.693 20:17:42 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:50.693 20:17:42 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:50.693 20:17:42 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:50.693 ************************************
00:03:50.693 START TEST per_node_1G_alloc
00:03:50.693 ************************************
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576
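The entries that follow show get_test_nr_hugepages turning this request into per-node page counts. Judging from the traced values, the arithmetic is simply the requested size in kB divided by the default hugepage size, assigned to each node the caller named; a hedged sketch with this run's numbers (variable names illustrative):

    size_kb=1048576              # 1 GiB requested by per_node_1G_alloc
    hugepage_kb=2048             # Hugepagesize reported in the meminfo dumps below
    nr_hugepages=$(( size_kb / hugepage_kb ))   # 512
    nodes_test=()
    for node in 0 1; do          # node ids passed by the caller
        nodes_test[node]=$nr_hugepages          # 512 pages on each node
    done
    printf 'node%s=%s\n' 0 "${nodes_test[0]}" 1 "${nodes_test[1]}"

With two nodes at 512 pages each, the system-wide total the verifier later expects is 1024, which matches the nr_hugepages=1024 traced at hugepages.sh@147 below.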
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 ))
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1')
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1')
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 ))
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:50.693 20:17:42 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:03:54.908 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:03:54.908 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
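The allocation itself is delegated to SPDK's setup.sh, driven purely through the environment, exactly as the trace shows: NRHUGE pages are requested on each node listed in HUGENODE, and devices found already bound to vfio-pci are just reported and left alone. Reproduced from the traced values as a one-liner:

    NRHUGE=512 HUGENODE=0,1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh

The comma in HUGENODE=0,1 lines up with the local IFS=, set at hugepages.sh@143 just before the call.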
00:03:54.908 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.908 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108678804 kB' 'MemAvailable: 112410352 kB' 'Buffers: 4132 kB' 'Cached: 10635600 kB' 'SwapCached: 0 kB' 'Active: 7596636 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105204 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 660956 kB' 'Mapped: 182084 kB' 'Shmem: 6447068 kB' 'KReclaimable: 579828 kB' 'Slab: 1455512 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875684 kB' 'KernelStack: 27840 kB' 'PageTables: 9240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8708112 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237884 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:03:54.909 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
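The guard traced at hugepages.sh@96 compares what looks like the contents of /sys/kernel/mm/transparent_hugepage/enabled ("always [madvise] never", with the active mode in brackets) against the pattern *[never]*. A hedged sketch of that check, reusing the get_meminfo_value helper sketched earlier:

    # Sketch only: assumes the traced string really is the THP sysfs file.
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        # THP is active in some mode, so anonymous hugepages could inflate
        # the counters; sample AnonHugePages so it can be factored out.
        anon=$(get_meminfo_value AnonHugePages)
    fi

In this run AnonHugePages is 0 kB, so transparent hugepages are not skewing the hugepage accounting that follows.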
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.910 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108680620 kB' 'MemAvailable: 112412168 kB' 'Buffers: 4132 kB' 'Cached: 10635604 kB' 'SwapCached: 0 kB' 'Active: 7597312 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105880 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661356 kB' 'Mapped: 182080 kB' 'Shmem: 6447072 kB' 'KReclaimable: 579828 kB' 'Slab: 1455404 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875576 kB' 'KernelStack: 27936 kB' 'PageTables: 9696 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8708132 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237948 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
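With anon and surp both sampled as 0, the verifier can compare what the kernel reports against what the test requested. A hedged sketch of the bookkeeping suggested by the traced hugepages.sh lines (@117, @126-@130), again using the illustrative get_meminfo_value helper; the exact per-node reduction in the real script may differ:

    nr_hugepages=1024                              # expected total from the trace
    total=$(get_meminfo_value HugePages_Total)     # 1024 in this run
    free=$(get_meminfo_value HugePages_Free)       # 1024: nothing in use yet
    surp=$(get_meminfo_value HugePages_Surp)       # 0, so no surplus to subtract
    resv=$(get_meminfo_value HugePages_Rsvd)       # fetched next in the trace
    echo "node0=$total expecting $nr_hugepages"
    [[ $total == "$nr_hugepages" ]]                # the test passes when they match

The default_setup run above printed exactly this form, "node0=1024 expecting 1024", before its banner.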
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108679484 kB' 'MemAvailable: 112411032 kB' 'Buffers: 4132 kB' 'Cached: 10635620 kB' 'SwapCached: 0 kB' 'Active: 7596468 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105036 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661220 kB' 'Mapped: 181996 kB' 'Shmem: 6447088 kB' 'KReclaimable: 579828 kB' 'Slab: 1455380 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875552 kB' 'KernelStack: 27696 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8707784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:54.912 
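The stanza above is the bash xtrace of setup/common.sh's get_meminfo helper resolving HugePages_Surp to 0: pick the meminfo source, strip any per-node prefix, then scan key/value pairs until the requested key matches. A minimal sketch of that pattern, reconstructed from the @17-@33 trace lines rather than copied from the SPDK source (extglob is assumed for the +([0-9]) prefix strip):

    #!/usr/bin/env bash
    # Sketch of the get_meminfo pattern visible in the trace; a reconstruction,
    # not the verbatim setup/common.sh.
    shopt -s extglob

    get_meminfo() {
        local get=$1 node=${2:-}
        local var val line
        local mem_f=/proc/meminfo
        local -a mem
        # With a node id, prefer the node-local meminfo under /sys; with an
        # empty node this test probes ".../node/node/meminfo" and fails, which
        # is exactly what the @23/@25 lines above show for the global case.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem < "$mem_f"
        # Node-local lines read "Node 0 MemTotal: ..."; drop that prefix.
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"
            # Emit the value of the requested key and stop scanning.
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

With that in place, surp=$(get_meminfo HugePages_Surp) yields the surp=0 assignment traced at hugepages.sh@99.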
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.911 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.912 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108679484 kB' 'MemAvailable: 112411032 kB' 'Buffers: 4132 kB' 'Cached: 10635620 kB' 'SwapCached: 0 kB' 'Active: 7596468 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105036 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661220 kB' 'Mapped: 181996 kB' 'Shmem: 6447088 kB' 'KReclaimable: 579828 kB' 'Slab: 1455380 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875552 kB' 'KernelStack: 27696 kB' 'PageTables: 9408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8707784 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace scan of the /proc/meminfo keys (MemTotal .. HugePages_Free) against HugePages_Rsvd omitted, each hitting setup/common.sh@32 -- # continue]
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
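Two artifacts of the trace format are worth decoding. The right-hand side of each [[ ]] test appears as \H\u\g\e\P\a\g\e\s\_\R\s\v\d because the script compares against a quoted variable, and xtrace marks a quoted pattern by backslash-escaping every character so it is matched literally rather than as a glob; and each lookup is a linear scan, so every key printed in the snapshot above is tested once until the match. A small demo of the escaping (illustrative, not taken from this log):

    set -x
    get=HugePages_Rsvd
    [[ MemTotal == "$get" ]] || true   # comparison is false; we only care about the trace line
    # xtrace is expected to print: + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]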
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:54.913 nr_hugepages=1024
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:54.913 resv_hugepages=0
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:54.913 surplus_hugepages=0
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:54.913 anon_hugepages=0
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
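The checks at hugepages.sh@107 and @109 assert the pool accounting identity before the per-node checks: the kernel's HugePages_Total must equal the requested nr_hugepages plus any surplus and reserved pages, and here all three queried values (surp=0, resv=0, total=1024) satisfy it. A condensed restatement of that check, reusing the get_meminfo sketch above (a reconstruction of the hugepages.sh logic, not a copy of it):

    nr_hugepages=1024                     # pool size requested by this test
    surp=$(get_meminfo HugePages_Surp)    # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)    # 0 in this run
    total=$(get_meminfo HugePages_Total)  # 1024 in this run
    # The pool is fully accounted for when these balance out:
    (( total == nr_hugepages + surp + resv )) || echo 'hugepage accounting mismatch' >&2

The HugePages_Total side of that identity is re-queried next, in the trace that follows.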
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:54.913 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:54.914 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.914 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.914 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.914 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.914 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108678140 kB' 'MemAvailable: 112409688 kB' 'Buffers: 4132 kB' 'Cached: 10635644 kB' 'SwapCached: 0 kB' 'Active: 7596792 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105360 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 661496 kB' 'Mapped: 181996 kB' 'Shmem: 6447112 kB' 'KReclaimable: 579828 kB' 'Slab: 1455376 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875548 kB' 'KernelStack: 27824 kB' 'PageTables: 9904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8707812 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace scan of the /proc/meminfo keys (MemTotal .. Unaccepted) against HugePages_Total omitted, each hitting setup/common.sh@32 -- # continue]
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
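get_nodes, traced next, discovers the NUMA topology by globbing /sys/devices/system/node/node* and records the expected per-node share of the pool; on this two-node box each node is assigned 512 of the 1024 pages. A sketch of that enumeration under the same extglob assumption (the even 512/512 split is taken from the nodes_sys assignments in the trace):

    # Sketch of the get_nodes pattern: one entry per /sys NUMA node.
    shopt -s extglob nullglob
    declare -A nodes_sys
    for node in /sys/devices/system/node/node+([0-9]); do
        # ${node##*node} reduces ".../node0" to the bare node id "0"
        nodes_sys[${node##*node}]=512   # 1024 pages split across 2 nodes
    done
    no_nodes=${#nodes_sys[@]}           # 2 on this box
    (( no_nodes > 0 )) || exit 1        # sanity: at least one node must exist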
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:54.915 20:17:46 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60490112 kB' 'MemUsed: 5168896 kB' 'SwapCached: 0 kB' 'Active: 1528736 kB' 'Inactive: 288480 kB' 'Active(anon): 1370988 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1666872 kB' 'Mapped: 58060 kB' 'AnonPages: 153528 kB' 'Shmem: 1220644 kB' 'KernelStack: 12728 kB' 'PageTables: 4108 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 744916 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420032 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
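This is the same get_meminfo loop, now fed by node 0's meminfo: with a node argument the @23-@24 lines switch mem_f from /proc/meminfo to /sys/devices/system/node/node0/meminfo, whose lines carry the "Node 0" prefix that the @29 strip removes. Node-local and global answers therefore legitimately differ, as the node0 snapshot above (HugePages_Total: 512) and the global ones earlier (1024) show. Hypothetical invocations of the sketch, with expected values read off those snapshots:

    get_meminfo HugePages_Total 0   # node0 view, /sys/devices/system/node/node0/meminfo -> 512
    get_meminfo HugePages_Total     # global view, /proc/meminfo                         -> 1024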
[xtrace scan of the node0 meminfo keys (MemTotal .. Unaccepted) against HugePages_Surp, each hitting setup/common.sh@32 -- # continue; the capture rolls from 00:03:54.915/20:17:46 to 00:03:54.917/20:17:47 with this scan still in progress]
00:03:54.917 20:17:47
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48187176 kB' 'MemUsed: 12492664 kB' 'SwapCached: 0 kB' 'Active: 6068004 kB' 'Inactive: 3412752 kB' 'Active(anon): 5734320 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8972928 kB' 'Mapped: 123936 kB' 'AnonPages: 507884 kB' 'Shmem: 5226492 kB' 
'KernelStack: 15112 kB' 'PageTables: 5468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254944 kB' 'Slab: 710460 kB' 'SReclaimable: 254944 kB' 'SUnreclaim: 455516 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:54.917 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the same scan walks the node1 fields just printed, MemTotal through FileHugePages, without a match] 00:03:54.918 20:17:47
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:54.918 node0=512 expecting 512 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:54.918 node1=512 expecting 512 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:54.918 00:03:54.918 real 0m4.108s 00:03:54.918 user 0m1.671s 00:03:54.918 sys 0m2.506s 00:03:54.918 20:17:47 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.918 20:17:47 
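The run above ends the per-node check cleanly: hugepages.sh@115-@117 folds reserved and per-node surplus pages into each expected count, @126-@128 records the distinct expected/observed totals and prints one "nodeN=X expecting Y" line per node, and @130 compares the two sets. A minimal sketch of that accounting, assuming nodes_test (expected counts), nodes_sys (counts read back) and resv are already populated as in the log, and that get_meminfo is the helper sketched at the end of this excerpt; the exact SPDK source may differ:

    for node in "${!nodes_test[@]}"; do
        (( nodes_test[node] += resv ))                                   # @116: global reserved pages
        (( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))  # @117: per-node surplus, 0 here
    done
    declare -a sorted_t sorted_s
    for node in "${!nodes_test[@]}"; do
        sorted_t[nodes_test[node]]=1   # @127: distinct expected totals become array keys
        sorted_s[nodes_sys[node]]=1    # @127: distinct observed totals
        echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"  # @128
    done
    [[ ${!sorted_s[*]} == "${!sorted_t[*]}" ]]   # @130: "512 == 512" in this run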
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:54.918 ************************************ 00:03:54.918 END TEST per_node_1G_alloc 00:03:54.918 ************************************ 00:03:54.918 20:17:47 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:54.918 20:17:47 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:54.918 20:17:47 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.918 20:17:47 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.918 20:17:47 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:54.918 ************************************ 00:03:54.918 START TEST even_2G_alloc 00:03:54.918 ************************************ 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:54.918 20:17:47 
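Before the new test body runs, get_test_nr_hugepages turns the 2G request into a page count and an even per-node split: with the 2048 kB Hugepagesize this machine reports further down, 2097152 kB / 2048 kB = 1024 pages, i.e. 512 per node, which is exactly what nodes_test ends up holding above. A sketch of that arithmetic, assuming the size argument is in kB (the real hugepages.sh also honors user-supplied node lists, skipped here):

    size=2097152                                   # requested: 2G expressed in kB
    default_hugepages=2048                         # Hugepagesize from /proc/meminfo, kB
    nr_hugepages=$(( size / default_hugepages ))   # -> 1024 pages
    _no_nodes=2                                    # NUMA nodes on this box
    nodes_test=()
    while (( _no_nodes > 0 )); do                  # @81-@84: walk nodes high to low
        nodes_test[_no_nodes - 1]=$(( nr_hugepages / 2 ))   # even split -> 512 each
        (( _no_nodes-- ))
    done
    echo "${nodes_test[@]}"                        # 512 512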
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:54.918 20:17:47 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:59.132 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:03:59.132 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.132 20:17:50 
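verify_nr_hugepages starts by deciding whether transparent huge pages could distort the numbers: the test at hugepages.sh@96 pattern-matches the THP mode string ("always [madvise] never" here, i.e. madvise is the active mode) against *[never]*, and only samples AnonHugePages when THP is not hard-disabled; the value lands in anon (0 kB in this run). A rough equivalent, assuming the mode string comes from the usual sysfs knob, which the log itself does not show:

    anon=0
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # e.g. "always [madvise] never"
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # kB of THP-backed anonymous memory
    fi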
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108694128 kB' 'MemAvailable: 112425676 kB' 'Buffers: 4132 kB' 'Cached: 10635796 kB' 'SwapCached: 0 kB' 'Active: 7598176 kB' 'Inactive: 3701232 kB' 'Active(anon): 7106744 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662480 kB' 'Mapped: 182140 kB' 'Shmem: 6447264 kB' 'KReclaimable: 579828 kB' 'Slab: 1455352 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875524 kB' 'KernelStack: 27744 kB' 'PageTables: 8900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8706024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.132 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.133 20:17:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31-32 -- # [xtrace condensed: the AnonHugePages scan rejects SwapCached through HardwareCorrupted; the full field list and values appear in the printf dump above] 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc --
setup/common.sh@31 -- # IFS=': ' 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108694472 kB' 'MemAvailable: 112426020 kB' 'Buffers: 4132 kB' 'Cached: 10635800 kB' 'SwapCached: 0 kB' 'Active: 7597672 kB' 'Inactive: 3701232 kB' 'Active(anon): 7106240 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662052 kB' 'Mapped: 182120 kB' 'Shmem: 6447268 kB' 'KReclaimable: 579828 kB' 'Slab: 1455332 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875504 kB' 'KernelStack: 27744 kB' 'PageTables: 8904 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8706044 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237724 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:59.134 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: the HugePages_Surp scan rejects MemFree through SUnreclaim] 00:03:59.135 20:17:50
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.135 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
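For readers following the trace: every lookup in this section uses the same get_meminfo pattern from setup/common.sh. Below is a minimal standalone sketch of that pattern; meminfo_get is an illustrative name and interface, not the script's real helper.

    #!/usr/bin/env bash
    # Sketch, assuming the same extglob conventions the trace shows.
    shopt -s extglob # needed for the +([0-9]) pattern below

    meminfo_get() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem line
        # With a node argument the per-node file exists; its lines carry a
        # "Node N " prefix that the pattern substitution strips off. With no
        # argument the path ".../node/meminfo" does not exist, so the
        # system-wide /proc/meminfo is kept, exactly as in the trace.
        if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        mapfile -t mem <"$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line" # split "Key:   value kB"
            [[ $var == "$get" ]] || continue      # skip every other key
            echo "$val"                           # e.g. 0 for HugePages_Surp here
            return 0
        done
        return 1
    }

    meminfo_get HugePages_Surp   # system-wide lookup
    meminfo_get HugePages_Surp 0 # node0 lookup via /sys/devices/system/node/node0/meminfo

Splitting on IFS=': ' lets the same loop serve both /proc/meminfo and the per-node files, since the "Node N " prefix has already been stripped by the extglob substitution.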
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.136 20:17:50 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108694224 kB' 'MemAvailable: 112425772 kB' 'Buffers: 4132 kB' 'Cached: 10635800 kB' 'SwapCached: 0 kB' 'Active: 7597184 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105752 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662036 kB' 'Mapped: 182044 kB' 'Shmem: 6447268 kB' 'KReclaimable: 579828 kB' 'Slab: 1455356 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875528 kB' 'KernelStack: 27744 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8706064 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[... setup/common.sh@32 checks each /proc/meminfo key against HugePages_Rsvd and continues past every non-matching one; the identical iterations are elided here ...]
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:59.138 nr_hugepages=1024
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:59.138 resv_hugepages=0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:59.138 surplus_hugepages=0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:59.138 anon_hugepages=0
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
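At this point the script holds all four counters (nr_hugepages, resv, surp, anon). A back-of-envelope check of this run's numbers, using only values visible in the snapshots above:

    # Values echoed by the trace; the test only proceeds if they reconcile.
    nr_hugepages=1024 anon=0 surp=0 resv=0
    (( 1024 == nr_hugepages + surp + resv )) && echo accounting-ok

    # Size cross-check straight from the meminfo fields:
    # HugePages_Total * Hugepagesize = 1024 * 2048 kB = 2097152 kB = Hugetlb
    echo $(( 1024 * 2048 )) # 2097152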
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108694840 kB' 'MemAvailable: 112426388 kB' 'Buffers: 4132 kB' 'Cached: 10635840 kB' 'SwapCached: 0 kB' 'Active: 7597228 kB' 'Inactive: 3701232 kB' 'Active(anon): 7105796 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 662036 kB' 'Mapped: 182044 kB' 'Shmem: 6447308 kB' 'KReclaimable: 579828 kB' 'Slab: 1455356 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875528 kB' 'KernelStack: 27744 kB' 'PageTables: 8896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8706088 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 
20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:59.138 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': '
00:03:59.138-00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: IFS=': '; read -r var val _; the keys Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each fail the HugePages_Total match and continue]
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
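For readers decoding the xtrace above: get_meminfo in setup/common.sh resolves one key from a meminfo file, preferring the per-node sysfs copy when a node index is passed. A minimal sketch of the same idea, assuming bash with extglob; get_meminfo_sketch is a hypothetical stand-in for illustration, not the real helper:

    shopt -s extglob                        # needed for the +([0-9]) pattern below
    get_meminfo_sketch() {
        # hypothetical re-creation of what the trace shows common.sh@17-33 doing
        local get=$1 node=$2 line var val _
        local -a mem
        local mem_f=/proc/meminfo
        # with a node argument, read that node's sysfs meminfo instead (common.sh@23-24)
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")    # per-node lines carry a "Node N " prefix
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<< "$line"   # split "Key: value kB"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # on this box: get_meminfo_sketch HugePages_Total  -> 1024
    #              get_meminfo_sketch HugePages_Surp 0 -> 0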
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.139 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.140 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60510096 kB' 'MemUsed: 5148912 kB' 'SwapCached: 0 kB' 'Active: 1528492 kB' 'Inactive: 288480 kB' 'Active(anon): 1370744 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1667012 kB' 'Mapped: 58076 kB' 'AnonPages: 153288 kB' 'Shmem: 1220784 kB' 'KernelStack: 12584 kB' 'PageTables: 3468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 744872 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 419988 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:59.140-00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: every node0 key from MemTotal through HugePages_Free fails the HugePages_Surp match and continues]
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
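The per-node loop above is the core of the even_2G_alloc check: with 1024 hugepages requested and two NUMA nodes, each node is expected to end up with 512 pages once reserved and surplus pages are folded in (both are 0 in this run). Roughly, reusing the hypothetical sketch above:

    expected=512
    for node in 0 1; do
        total=$(get_meminfo_sketch HugePages_Total "$node")   # 512 on both nodes here
        surp=$(get_meminfo_sketch HugePages_Surp "$node")     # 0 on both nodes here
        echo "node$node=$((total + surp)) expecting $expected"
    done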
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:59.141 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48184820 kB' 'MemUsed: 12495020 kB' 'SwapCached: 0 kB' 'Active: 6068772 kB' 'Inactive: 3412752 kB' 'Active(anon): 5735088 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8972980 kB' 'Mapped: 123968 kB' 'AnonPages: 508748 kB' 'Shmem: 5226544 kB' 'KernelStack: 15160 kB' 'PageTables: 5428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254944 kB' 'Slab: 710484 kB' 'SReclaimable: 254944 kB' 'SUnreclaim: 455540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:59.141-00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31-32 -- # [trace condensed: every node1 key from MemTotal through HugePages_Free fails the HugePages_Surp match and continues]
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
node0=512 expecting 512
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
node1=512 expecting 512
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:03:59.142 real 0m3.980s
00:03:59.142 user 0m1.585s
00:03:59.142 sys 0m2.468s
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:03:59.142 20:17:51 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x
00:03:59.142 ************************************
00:03:59.142 END TEST even_2G_alloc
00:03:59.142 ************************************
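The sorted_t/sorted_s lines visible above are how the test reduces per-node counts to a single comparison: using each count as an array key deduplicates identical values, so an even allocation collapses to one key and hugepages.sh@130 can compare 512 against 512 directly. A small sketch of the same deduplication idea, with hypothetical names:

    declare -A seen=()
    nodes_test=(512 512)              # per-node counts gathered above
    for node in "${!nodes_test[@]}"; do
        seen[${nodes_test[node]}]=1   # identical counts land on the same key
    done
    echo "${!seen[@]}"                # -> 512 (a single key means the split was even)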
00:03:59.142 20:17:51 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:03:59.142 20:17:51 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:03:59.142 20:17:51 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:03:59.142 20:17:51 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:03:59.142 20:17:51 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:03:59.142 ************************************
00:03:59.142 START TEST odd_alloc
00:03:59.142 ************************************
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:03:59.142 20:17:51 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:03.372 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:03.372 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
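The odd_alloc parameters traced above follow from HUGEMEM=2049: 2049 MB of 2048 kB pages is deliberately a non-even page count, so the two nodes cannot receive identical shares (the trace assigns 513 to one node and 512 to the other). A rough sketch of the sizing arithmetic; the round-up step is an assumption, since the trace only shows the input (2098176 kB) and the result (1025 pages):

    hugemem_mb=2049 hp_kb=2048
    size_kb=$((hugemem_mb * 1024))            # 2098176 kB, the value get_test_nr_hugepages receives
    nr=$(( (size_kb + hp_kb - 1) / hp_kb ))   # 1025 pages: the half page rounds up
    echo "nr_hugepages=$nr, split 513 + 512 across the two nodes"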
'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664204 kB' 'Mapped: 182204 kB' 'Shmem: 6447444 kB' 'KReclaimable: 579828 kB' 'Slab: 1455356 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875528 kB' 'KernelStack: 27840 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8709340 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:03.372 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.372 20:17:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # [scan condensed: IFS=': '; read -r var val _; each remaining /proc/meminfo key from Inactive through HardwareCorrupted is compared against AnonHugePages and skipped with continue] 00:04:03.372
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
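Reconstructed from the setup/common.sh@17-@33 statements visible in this trace, the get_meminfo helper behaves roughly like the sketch below; this is an approximation for readability, not SPDK's verbatim source, and the per-node branch is inferred from the @23/@25 probes seen above.

shopt -s extglob  # needed for the "Node N " prefix strip below

get_meminfo() {
    # get_meminfo <key> [numa-node] -> print the value column for <key>
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo
    local -a mem
    # the traced calls pass no node, so this probe sees ".../node/node/meminfo"
    # (empty suffix), fails, and leaves mem_f at /proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")  # per-node lines carry a "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue  # the long compare-and-skip runs in this trace
        echo "$val"
        return 0
    done
    return 1
}

# usage, matching the calls traced here (node argument omitted):
#   get_meminfo AnonHugePages   -> 0    (kB)
#   get_meminfo HugePages_Total -> 1025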
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108714356 kB' 'MemAvailable: 112445904 kB' 'Buffers: 4132 kB' 'Cached: 10635980 kB' 'SwapCached: 0 kB' 'Active: 7599568 kB' 'Inactive: 3701232 kB' 'Active(anon): 7108136 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663792 kB' 'Mapped: 182164 kB' 'Shmem: 6447448 kB' 'KReclaimable: 579828 kB' 'Slab: 1455348 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875520 kB' 'KernelStack: 27888 kB' 'PageTables: 9472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8709712 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237708 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:04:03.373 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # [scan condensed: every key from MemTotal through HugePages_Rsvd is compared against HugePages_Surp and skipped with continue]
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0
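As an aside, and not part of this harness: the per-key compare-and-skip churn that dominates this trace is the cost of doing the lookup in pure bash; an equivalent single-command lookup would be, for example:

awk '$1 == "HugePages_Surp:" {print $2}' /proc/meminfo   # -> 0 in this run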
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108714616 kB' 'MemAvailable: 112446164 kB' 'Buffers: 4132 kB' 'Cached: 10635996 kB' 'SwapCached: 0 kB' 'Active: 7598760 kB' 'Inactive: 3701232 kB' 'Active(anon): 7107328 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663284 kB' 'Mapped: 182072 kB' 'Shmem: 6447464 kB' 'KReclaimable: 579828 kB' 'Slab: 1455332 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875504 kB' 'KernelStack: 27824 kB' 'PageTables: 9368 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8709732 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:04:03.375 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # [scan condensed: every key from MemTotal through HugePages_Free is compared against HugePages_Rsvd and skipped with continue]
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
nr_hugepages=1025
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
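The checks at hugepages.sh@107 and @109 are the point of the odd_alloc case: an odd page count (1025) must be fully backed, with no surplus or reserved pages making up the difference. A minimal sketch of that accounting, using the variable names from the trace (the surrounding harness and its exact failure handling are assumptions):

nr_hugepages=1025                     # the odd allocation the test requested
anon=$(get_meminfo AnonHugePages)     # 0 (kB) in this run
surp=$(get_meminfo HugePages_Surp)    # 0
resv=$(get_meminfo HugePages_Rsvd)    # 0
(( nr_hugepages + surp + resv == 1025 )) || exit 1   # hugepages.sh@107
(( nr_hugepages == 1025 )) || exit 1                 # hugepages.sh@109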
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108717016 kB' 'MemAvailable: 112448564 kB' 'Buffers: 4132 kB' 'Cached: 10636016 kB' 'SwapCached: 0 kB' 'Active: 7599048 kB' 'Inactive: 3701232 kB' 'Active(anon): 7107616 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663184 kB' 'Mapped: 182088 kB' 'Shmem: 6447484 kB' 'KReclaimable: 579828 kB' 'Slab: 1455332 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875504 kB' 'KernelStack: 27872 kB' 'PageTables: 9212 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70508428 kB' 'Committed_AS: 8708040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237740 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:04:03.377 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31-@32 -- # [scan condensed: keys from MemTotal through PageTables are compared against HugePages_Total and skipped with continue; the scan continues] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc --
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.378 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60524980 kB' 'MemUsed: 5134028 kB' 'SwapCached: 0 kB' 'Active: 1530504 kB' 'Inactive: 288480 kB' 'Active(anon): 1372756 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1667124 kB' 'Mapped: 58096 kB' 'AnonPages: 155120 kB' 'Shmem: 1220896 kB' 'KernelStack: 12728 kB' 'PageTables: 3772 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 745196 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420312 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # 
continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.379 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 48192092 kB' 'MemUsed: 12487748 kB' 'SwapCached: 0 kB' 'Active: 6068180 kB' 'Inactive: 3412752 kB' 'Active(anon): 5734496 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8973068 kB' 'Mapped: 123976 kB' 'AnonPages: 508068 kB' 'Shmem: 5226632 kB' 'KernelStack: 15160 kB' 'PageTables: 5540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254944 kB' 'Slab: 710104 kB' 'SReclaimable: 254944 kB' 'SUnreclaim: 455160 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ 
MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.380 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.381 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # read -r var val _
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513'
00:04:03.382 node0=512 expecting 513
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512'
00:04:03.382 node1=513 expecting 512
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]]
00:04:03.382
00:04:03.382 real 0m4.044s
00:04:03.382 user 0m1.572s
00:04:03.382 sys 0m2.539s
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:03.382 20:17:55 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:03.382 ************************************
00:04:03.382 END TEST odd_alloc
00:04:03.382 ************************************
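The odd_alloc pass above asks for 1025 huge pages (an odd total) and lets the kernel spread them over the two NUMA nodes. The trace then reads HugePages_Total back out of each node's meminfo and compares the sorted set of actual per-node counts against the sorted set of requested counts, so the 512/513 vs. 513/512 swap seen here still passes. A minimal bash sketch of that check, assuming a two-node box; get_node_hugepages is a hypothetical stand-in for the get_meminfo helper traced in setup/common.sh:

#!/usr/bin/env bash
# Sketch only: mirrors the traced logic, not a copy of test/setup/hugepages.sh.
shopt -s extglob

get_node_hugepages() { # hypothetical helper standing in for get_meminfo
    local node=$1 var val _ line
    local -a mem
    mapfile -t mem < "/sys/devices/system/node/node${node}/meminfo"
    mem=("${mem[@]#Node +([0-9]) }") # strip the "Node N " prefix, as common.sh@29 does
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Total ]] && { echo "$val"; return 0; }
    done
    return 1
}

declare -a sorted_t sorted_s nodes_sys=(512 513) # requested per-node split
for node in 0 1; do
    actual=$(get_node_hugepages "$node")
    sorted_t[actual]=1          # numeric indices, so "${!sorted_t[@]}"
    sorted_s[nodes_sys[node]]=1 # lists the counts in sorted order for free
    echo "node${node}=${actual} expecting ${nodes_sys[node]}"
done
[[ ${!sorted_t[*]} == "${!sorted_s[*]}" ]] && echo OK # here: 512 513 == 512 513

The indexed-array trick is why the trace logs sorted_t[nodes_test[node]]=1 rather than an explicit sort: bash lists the indices of an indexed array in ascending order.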
00:04:03.382 20:17:55 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:03.382 20:17:55 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc
00:04:03.382 20:17:55 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:03.382 20:17:55 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:03.382 20:17:55 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:03.382 ************************************
00:04:03.382 START TEST custom_alloc
00:04:03.382 ************************************
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=,
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512
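get_test_nr_hugepages turns a size into a page count by dividing by the default huge page size (Hugepagesize: 2048 kB in this run, so 1048576 / 2048 = 512 pages, and 2097152 gives 1024 below) and, with no user-supplied node list, splits the pages evenly over the nodes, which is where the two nodes_test[...]=256 assignments above come from. A rough sketch of that arithmetic, with names borrowed from the trace and units assumed to be kB as in /proc/meminfo (illustrative, not the script itself):

#!/usr/bin/env bash
# Illustration of the sizing math only; the real logic lives in
# test/setup/hugepages.sh (get_test_nr_hugepages / get_test_nr_hugepages_per_node).
default_hugepages=$(awk '$1 == "Hugepagesize:" {print $2}' /proc/meminfo) # 2048 here

declare -a nodes_test
get_test_nr_hugepages() {
    local size=$1 _no_nodes=2
    (( size >= default_hugepages )) || return 1
    nr_hugepages=$((size / default_hugepages))        # 1048576 / 2048 = 512
    local _nr_hugepages=$((nr_hugepages / _no_nodes)) # even split; the real
    while (( _no_nodes > 0 )); do                     # script also spreads any
        nodes_test[_no_nodes - 1]=$_nr_hugepages      # remainder, see @83/@84
        ((_no_nodes--))
    done
}

get_test_nr_hugepages 1048576
echo "nr_hugepages=$nr_hugepages per node: ${nodes_test[*]}" # 512, then 256 256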
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 ))
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}"
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024'
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:03.382 20:17:55 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
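Before scripts/setup.sh runs (its output follows), custom_alloc has folded the two per-pool requests into one HUGENODE spec: each nodes_hp[N] entry becomes a nodes_hp[$node]=count token, the local IFS=, joins the tokens with commas, and the counts add up to the nr_hugepages=1536 verified afterwards. A small sketch of that assembly, reusing the array names from the trace (illustrative, not the script itself):

#!/usr/bin/env bash
# Sketch of the HUGENODE assembly seen at hugepages.sh@181-@187 above.
declare -a nodes_hp HUGENODE
nodes_hp[0]=512  # from get_test_nr_hugepages 1048576
nodes_hp[1]=1024 # from get_test_nr_hugepages 2097152
_nr_hugepages=0
for node in "${!nodes_hp[@]}"; do
    HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}")
    (( _nr_hugepages += nodes_hp[node] ))
done
IFS=, # custom_alloc declares local IFS=, so "${HUGENODE[*]}" joins on commas
echo "HUGENODE=${HUGENODE[*]}"     # HUGENODE=nodes_hp[0]=512,nodes_hp[1]=1024
echo "nr_hugepages=$_nr_hugepages" # 1536, the count verify_nr_hugepages checks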
00:04:06.688 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:06.688 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.688 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107675664 kB' 'MemAvailable: 111407212 kB' 'Buffers: 4132 kB' 'Cached: 10636148 kB' 'SwapCached: 0 kB' 'Active: 7599844 kB' 'Inactive: 3701232 kB' 'Active(anon): 7108412 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 664028 kB' 'Mapped: 182124 kB' 'Shmem: 6447616 kB' 'KReclaimable: 579828 kB' 'Slab: 1455364 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875536 kB' 'KernelStack: 27840 kB' 'PageTables: 9132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8708980 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237916 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: setup/common.sh@31-32 repeats IFS=': ' / read -r var val _ / [[ $var == AnonHugePages ]] / continue for every non-matching key from MemTotal through HardwareCorrupted]
00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0
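The block above shows the pattern this whole phase repeats: get_meminfo snapshots /proc/meminfo (or a per-node meminfo file when a node argument is given), strips any "Node N " prefix, then scans key/value pairs until the requested key matches and echoes its value. A minimal standalone sketch of that logic, inferred from the trace rather than copied from setup/common.sh (names and structure here are illustrative):

    # Sketch of the parsing pattern visible in the xtrace output above.
    shopt -s extglob
    get_meminfo_sketch() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo var val _
        local -a mem
        # Per-node meminfo lives under sysfs and prefixes each line with "Node N ".
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        # Scan key/value pairs; print the value of the first matching key.
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < <(printf '%s\n' "${mem[@]}")
        return 1
    }
    # e.g. get_meminfo_sketch HugePages_Total  ->  1536 on this runner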
00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.690 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107676164 kB' 'MemAvailable: 111407712 kB' 'Buffers: 4132 kB' 'Cached: 10636152 kB' 'SwapCached: 0 kB' 'Active: 7599340 kB' 'Inactive: 3701232 kB' 'Active(anon): 7107908 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663520 kB' 'Mapped: 182160 kB' 'Shmem: 6447620 kB' 'KReclaimable: 579828 kB' 'Slab: 1455372 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875544 kB' 'KernelStack: 27712 kB' 'PageTables: 8924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8709000 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237772 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: same setup/common.sh@31-32 per-key scan, this time against HugePages_Surp, for every non-matching key from MemTotal through HugePages_Rsvd]
00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
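For context on what this verification is checking: the HUGENODE spec built earlier in this excerpt, 'nodes_hp[0]=512,nodes_hp[1]=1024', asks for 512 pages of 2048 kB on NUMA node 0 and 1024 on node 1, 1536 in total (the nr_hugepages=1536 above). A hypothetical minimal allocator for such a spec; the helper name and parsing are illustrative only, and the real scripts/setup.sh handles far more than this:

    # Hypothetical sketch: apply a HUGENODE-style spec by writing per-node
    # 2 MiB hugepage counts through sysfs (root required).
    alloc_hugepages_per_node() {
        local -a entries
        local entry node count
        IFS=',' read -ra entries <<< "$1"
        for entry in "${entries[@]}"; do
            node=${entry#nodes_hp[}; node=${node%%]*}   # "nodes_hp[0]=512" -> "0"
            count=${entry#*=}                           # "nodes_hp[0]=512" -> "512"
            echo "$count" > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
        done
    }
    # alloc_hugepages_per_node 'nodes_hp[0]=512,nodes_hp[1]=1024'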
00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.691 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.692 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107675956 kB' 'MemAvailable: 111407504 kB' 'Buffers: 4132 kB' 'Cached: 10636152 kB' 'SwapCached: 0 kB' 'Active: 7600344 kB' 'Inactive: 3701232 kB' 'Active(anon): 7108912 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 664552 kB' 'Mapped: 182160 kB' 'Shmem: 6447620 kB' 'KReclaimable: 579828 kB' 'Slab: 1455372 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875544 kB' 'KernelStack: 27760 kB' 'PageTables: 9164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8710740 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237868 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: the same setup/common.sh@31-32 per-key scan now runs against HugePages_Rsvd; the trace continues]
setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: HugePages_Free skipped]
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:06.693 nr_hugepages=1536
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:06.693 resv_hugepages=0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:06.693 surplus_hugepages=0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:06.693 anon_hugepages=0
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.693 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.694 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 107675592 kB' 'MemAvailable: 111407140 kB' 'Buffers: 4132 kB' 'Cached: 10636192 kB' 'SwapCached: 0 kB' 'Active: 7599452 kB' 'Inactive: 3701232 kB' 'Active(anon): 7108020 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 663596 kB' 'Mapped: 182084 kB' 'Shmem: 6447660 kB' 'KReclaimable: 579828 kB' 'Slab: 1455384 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875556 kB' 'KernelStack: 27728 kB' 'PageTables: 9148 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 69985164 kB' 'Committed_AS: 8709040 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237836 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
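
For readers skimming the trace: the repeated IFS=': ' / read -r var val _ / continue entries are one pass of the harness's get_meminfo helper over a meminfo file. A minimal sketch of that loop, reconstructed from the traced commands above (the real setup/common.sh may differ in detail):

    #!/usr/bin/env bash
    # Sketch only: reconstructed from the xtrace, not copied from SPDK.
    shopt -s extglob  # the +([0-9]) pattern below is an extended glob

    get_meminfo() {
        local get=$1 node=${2:-} var val
        local mem_f mem

        mem_f=/proc/meminfo
        # With a node argument, prefer the per-NUMA-node view from sysfs.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo

        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that prefix.
        mem=("${mem[@]#Node +([0-9]) }")

        local line
        for line in "${mem[@]}"; do
            # Every key the trace shows being tested and skipped is one
            # iteration of this loop hitting the "continue".
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"  # e.g. 0 for HugePages_Rsvd, 1536 for HugePages_Total
            return 0
        done
        return 1  # hypothetical fallback; the trace never reaches this
    }

As a consistency check on the snapshot just printed: HugePages_Total (1536) x Hugepagesize (2048 kB) = 3145728 kB, which matches the Hugetlb line, and HugePages_Free equals HugePages_Total, so none of the pool is in use yet.
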
00:04:06.694 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every /proc/meminfo key from MemTotal through Unaccepted read and skipped while scanning for HugePages_Total]
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 60523880 kB' 'MemUsed: 5135128 kB' 'SwapCached: 0 kB' 'Active: 1529120 kB' 'Inactive: 288480 kB' 'Active(anon): 1371372 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1667140 kB' 'Mapped: 58116 kB' 'AnonPages: 153540 kB' 'Shmem: 1220912 kB' 'KernelStack: 12632 kB' 'PageTables: 3780 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 745512 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420628 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
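
The get_nodes entries a few lines up (nodes_sys[...]=512, nodes_sys[...]=1024, no_nodes=2) amount to the following loop. A sketch under the assumption that the per-node counts come from the standard sysfs hugepage counters, which the trace itself does not show:

    shopt -s extglob
    declare -A nodes_sys
    no_nodes=0

    get_nodes() {
        local node
        for node in /sys/devices/system/node/node+([0-9]); do
            # ${node##*node} keeps only the trailing digits: node0 -> 0.
            # Assumed source of the 512/1024 values seen in the trace:
            nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
        done
        no_nodes=${#nodes_sys[@]}
        (( no_nodes > 0 ))  # bail out if no NUMA nodes were enumerated
    }

On this box that yields nodes_sys=([0]=512 [1]=1024), i.e. the 1536-page custom allocation is split 512/1024 across the two sockets, and the loop that follows verifies each node's surplus count.
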
00:04:06.695 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every node0 meminfo key from MemTotal through HugePages_Free read and skipped while scanning for HugePages_Surp]
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:06.696 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:06.697 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60679840 kB' 'MemFree: 47150728 kB' 'MemUsed: 13529112 kB' 'SwapCached: 0 kB' 'Active: 6070464 kB' 'Inactive: 3412752 kB' 'Active(anon): 5736780 kB' 'Inactive(anon): 0 kB' 'Active(file): 333684 kB' 'Inactive(file): 3412752 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8973232 kB' 'Mapped: 123960 kB' 'AnonPages: 510156 kB' 'Shmem: 5226796 kB' 'KernelStack: 15160 kB' 'PageTables: 5540 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 254944 kB' 'Slab: 709968 kB' 'SReclaimable: 254944 kB' 'SUnreclaim: 455024 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
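
Both per-node snapshots are now in: node0 reports HugePages_Total: 512 and node1 reports HugePages_Total: 1024, which sums to the global 1536 with zero surplus pages on either node. The same split can be eyeballed directly from sysfs; illustrative one-liners (real column spacing will differ):

    # Per-node hugepage lines exactly as the kernel reports them:
    grep HugePages /sys/devices/system/node/node*/meminfo
    # Just the 2 MB page counts the harness compares against:
    cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
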
00:04:06.697 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@31-32 -- # [xtrace condensed: every node1 meminfo key from MemTotal through HugePages_Free read and skipped while scanning for HugePages_Surp] 00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:06.698 20:17:58
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:06.698 node0=512 expecting 512
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:06.698 node1=1024 expecting 1024
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:06.698
00:04:06.698 real 0m3.589s
00:04:06.698 user 0m1.286s
00:04:06.698 sys 0m2.254s
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:06.698 20:17:58 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:06.698 ************************************
00:04:06.698 END TEST custom_alloc
00:04:06.698 ************************************
00:04:06.698 20:17:58 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
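The custom_alloc pass above ends with a 512/1024 split of 2 MB hugepages across the two NUMA nodes. Below is a minimal sketch of that per-node check written directly against the standard sysfs layout rather than SPDK's setup/hugepages.sh; the expected-count array and the exit-on-mismatch policy are illustrative assumptions, not the test's actual code.

#!/usr/bin/env bash
# Illustrative only: assert a 512/1024 split of 2 MB hugepages across two
# NUMA nodes, the condition custom_alloc just reported as
# 'node0=512 expecting 512' / 'node1=1024 expecting 1024'.
declare -A expected=([0]=512 [1]=1024)

for node in "${!expected[@]}"; do
  sysfs=/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages
  actual=$(<"$sysfs")
  echo "node$node=$actual expecting ${expected[$node]}"
  [[ "$actual" -eq "${expected[$node]}" ]] || exit 1
done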
00:04:06.698 20:17:58 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:06.698 20:17:58 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:06.698 20:17:58 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:06.698 20:17:58 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:06.698 ************************************
00:04:06.698 START TEST no_shrink_alloc
00:04:06.698 ************************************
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:06.698 20:17:58 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:10.905 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:10.905 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
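From here, verify_nr_hugepages samples /proc/meminfo three times through the get_meminfo helper. The sketch below reconstructs what that helper does from the xtrace entries that follow; the authoritative version lives in setup/common.sh and reads the file with mapfile, stripping a leading 'Node <n> ' prefix for per-node files, so the simplified read loop here is an assumption that preserves only the visible behaviour for /proc/meminfo.

# Reconstruction of the get_meminfo helper from the xtrace below (not the
# verbatim setup/common.sh source): look a field up in /proc/meminfo, or in
# a node's own meminfo when a node id is given, and echo its numeric value.
get_meminfo() {
  local get=$1 node=${2:-}
  local var val _
  local mem_f=/proc/meminfo

  # With a node id, read that node's meminfo from sysfs instead. Note: the
  # real helper also strips the 'Node <n> ' prefix those files carry; this
  # simplified loop only parses /proc/meminfo correctly.
  if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo
  fi

  # Scan 'Field: value [kB]' lines until the requested field turns up.
  while IFS=': ' read -r var val _; do
    if [[ $var == "$get" ]]; then
      echo "$val"
      return 0
    fi
  done < "$mem_f"
  return 1
}

get_meminfo HugePages_Total   # prints 1024 on this box, per the snapshots below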
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108627768 kB' 'MemAvailable: 112359316 kB' 'Buffers: 4132 kB' 'Cached: 10636324 kB' 'SwapCached: 0 kB' 'Active: 7606192 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114760 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670268 kB' 'Mapped: 183000 kB' 'Shmem: 6447792 kB' 'KReclaimable: 579828 kB' 'Slab: 1455612 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875784 kB' 'KernelStack: 27856 kB' 'PageTables: 9328 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8717876 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237888 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
00:04:10.905 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # (repeated for each meminfo field, MemTotal through HardwareCorrupted: IFS=': '; read -r var val _; continue)
00:04:10.906 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:10.906 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.906 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
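With AnonHugePages confirmed to be 0 (anon=0), the test turns to surplus and reserved pages. A hedged sketch of the bookkeeping this step performs, reusing the get_meminfo sketch above: the variable names follow the @97/@99/@100 xtrace entries, but the final comparison is illustrative, not SPDK's exact expression.

# Illustrative bookkeeping for the step below; not the hugepages.sh source.
anon=$(get_meminfo AnonHugePages)    # kB of transparent hugepages in use
surp=$(get_meminfo HugePages_Surp)   # pages allocated beyond the static pool
resv=$(get_meminfo HugePages_Rsvd)   # pages reserved but not yet faulted in

total=$(get_meminfo HugePages_Total)
free=$(get_meminfo HugePages_Free)
echo "total=$total free=$free anon=$anon surp=$surp resv=$resv"

# A clean baseline has no dynamic hugepage activity: the snapshots here show
# surp=0 and resv=0, so the whole pool is the static 1024 pages.
(( surp == 0 && resv == 0 )) || echo 'unexpected surplus/reserved hugepages' >&2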
kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670904 kB' 'Mapped: 183000 kB' 'Shmem: 6447792 kB' 'KReclaimable: 579828 kB' 'Slab: 1455564 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875736 kB' 'KernelStack: 27840 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8718024 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.907 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.908 20:18:02 
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan continues: AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free and HugePages_Rsvd each tested against HugePages_Surp; no match -> continue]
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.908 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.909 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108627300 kB' 'MemAvailable: 112358848 kB' 'Buffers: 4132 kB' 'Cached: 10636340 kB' 'SwapCached: 0 kB' 'Active: 7606132 kB' 'Inactive: 3701232 kB' 'Active(anon): 7114700 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670172 kB' 'Mapped: 182992 kB' 'Shmem: 6447808 kB' 'KReclaimable: 579828 kB' 'Slab: 1455596 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875768 kB' 'KernelStack: 27840 kB' 'PageTables: 9268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8718048 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
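The trace above is setup/common.sh's get_meminfo helper scanning /proc/meminfo one "Key: value" pair at a time under xtrace, which is why every key appears as its own [[ ... ]] test. A minimal sketch of the same parsing idea, assuming a plain /proc/meminfo read (the real helper also handles the per-node files via mapfile and a continue-driven loop, as the trace shows):

    #!/usr/bin/env bash
    # Minimal sketch of the get_meminfo pattern traced above (simplified).
    get_meminfo() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # e.g. "HugePages_Surp:      0" -> var=HugePages_Surp val=0
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo HugePages_Surp   # prints 0 on this machine

The IFS=': ' setting splits on both the colon and the padding spaces, so the second field is the bare numeric value.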
00:04:10.909 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: every /proc/meminfo key from MemTotal through HugePages_Free tested against HugePages_Rsvd; no match -> continue]
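An aside on the log format itself: the \H\u\g\e\P\a\g\e\s\_... strings are not corruption. Under set -x, bash prints the right-hand side of a [[ $var == word ]] test with each literal character backslash-escaped, because an unquoted right-hand side is a glob pattern. A two-line reproduction (illustrative only):

    set -x
    var=HugePages_Rsvd
    [[ $var == HugePages_Rsvd ]]   # traces as: [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]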
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.910 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.911 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108627692 kB' 'MemAvailable: 112359240 kB' 'Buffers: 4132 kB' 'Cached: 10636356 kB' 'SwapCached: 0 kB' 'Active: 7606664 kB' 'Inactive: 3701232 kB' 'Active(anon): 7115232 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670804 kB' 'Mapped: 182992 kB' 'Shmem: 6447824 kB' 'KReclaimable: 579828 kB' 'Slab: 1455596 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875768 kB' 'KernelStack: 27856 kB' 'PageTables: 9384 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8718444 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
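With surp and resv both 0, hugepages.sh next verifies that the kernel's HugePages_Total matches the requested count plus surplus and reserved pages, which is the (( 1024 == nr_hugepages + surp + resv )) arithmetic in the trace. A standalone sketch of that invariant, reusing the hypothetical get_meminfo from the earlier sketch:

    # Sketch of the accounting check; nr_hugepages=1024 is what this job requested.
    nr_hugepages=1024
    surp=$(get_meminfo HugePages_Surp)     # 0 in this run
    resv=$(get_meminfo HugePages_Rsvd)     # 0 in this run
    total=$(get_meminfo HugePages_Total)   # 1024 in this run
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch" >&2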
00:04:10.911 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: every /proc/meminfo key from MemTotal through Unaccepted tested against HugePages_Total; no match -> continue]
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:10.912 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59456400 kB' 'MemUsed: 6202608 kB' 'SwapCached: 0 kB' 'Active: 1530320 kB' 'Inactive: 288480 kB' 'Active(anon): 1372572 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1667228 kB' 'Mapped: 58864 kB' 'AnonPages: 154704 kB' 'Shmem: 1221000 kB' 'KernelStack: 12568 kB' 'PageTables: 3376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 745484 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420600 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
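Here get_nodes enumerates /sys/devices/system/node/node<N> with an extglob pattern, and get_meminfo is re-run per node against /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that the helper strips. A small sketch of the same per-node walk; the awk field index assumes the standard "Node <N> Key: value" layout of these files:

    # Sketch: read HugePages_Total for every NUMA node.
    shopt -s extglob                      # needed for the +([0-9]) pattern
    declare -a nodes_total
    for node in /sys/devices/system/node/node+([0-9]); do
        id=${node##*node}                 # .../node0 -> 0
        # Lines look like "Node 0 HugePages_Total:  1024", so the value is $4.
        nodes_total[id]=$(awk '$3 == "HugePages_Total:" {print $4}' "$node/meminfo")
    done
    echo "node0=${nodes_total[0]}"        # node0=1024 on this machine

On this two-node box the test expects all 1024 pages on node0 and none on node1, which is what the nodes_sys assignments above record.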
00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [scan: every node0 meminfo key from MemTotal through HugePages_Free tested against HugePages_Surp; no match -> continue]
]] 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.913 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:10.914 node0=1024 expecting 1024 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:10.914 20:18:02 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:15.127 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:04:15.127 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:04:15.127 0000:00:01.7 (8086 0b00): Already using the 
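Above, hugepages.sh folds the scraped HugePages_Surp value into its per-node tally (nodes_test), buckets the totals through sorted_t/sorted_s, and prints the node0 check. A minimal runnable sketch of that per-node tally under stated assumptions: nodes_test is the only name taken from the trace, the 2048 kB page size matches the Hugepagesize line in the snapshots below, and the expected count of 1024 is illustrative.

#!/usr/bin/env bash
# Tally hugepages per NUMA node from sysfs, then assert the expected total --
# the same shape as the nodes_test bookkeeping in the trace above.
declare -A nodes_test

for node_dir in /sys/devices/system/node/node[0-9]*; do
    node=${node_dir##*node}
    nodes_test[$node]=$(<"$node_dir/hugepages/hugepages-2048kB/nr_hugepages")
done

expected=1024   # illustrative; mirrors the 'node0=1024 expecting 1024' line
for node in "${!nodes_test[@]}"; do
    echo "node${node}=${nodes_test[$node]} expecting ${expected}"
    [[ ${nodes_test[$node]} -eq $expected ]] || exit 1
done

On this host the sketch would print node0=1024 expecting 1024 and exit 0, matching the trace.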
00:04:15.127 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:65:00.0 (144d a80a): Already using the vfio-pci driver
00:04:15.127 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver
00:04:15.127 INFO: Requested 512 hugepages but 1024 already allocated on node0
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
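The @96 test above is the transparent-hugepage gate: the kernel brackets the active THP mode in the sysfs file, and the AnonHugePages scan that follows only matters when THP is not disabled. A small sketch of the same check, assuming the usual sysfs location:

# Read the active THP mode; the bracketed word is the selected value,
# e.g. "always [madvise] never" on this host per the trace.
thp=$(</sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
    echo "THP enabled ($thp); AnonHugePages can fluctuate between snapshots"
fi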
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.127 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108632912 kB' 'MemAvailable: 112364460 kB' 'Buffers: 4132 kB' 'Cached: 10636496 kB' 'SwapCached: 0 kB' 'Active: 7607872 kB' 'Inactive: 3701232 kB' 'Active(anon): 7116440 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671220 kB' 'Mapped: 183124 kB' 'Shmem: 6447964 kB' 'KReclaimable: 579828 kB' 'Slab: 1455700 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875872 kB' 'KernelStack: 27824 kB' 'PageTables: 9220 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8719180 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
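common.sh@28-29 above snapshot the whole file into an array before scanning, stripping any leading "Node <n> " prefix so per-node meminfo files parse identically to /proc/meminfo. The same two lines, runnable on their own; the extglob option is an assumption here, presumably enabled elsewhere in the harness, since +([0-9]) needs it:

shopt -s extglob                   # +([0-9]) below is an extglob pattern
mapfile -t mem < /proc/meminfo     # one array element per meminfo line
mem=("${mem[@]#Node +([0-9]) }")   # no-op for /proc/meminfo; strips the
                                   # "Node 0 " prefix of per-node files
printf '%s\n' "${mem[@]}"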
[xtrace: the setup/common.sh@31-32 read/continue pair repeats for every key from MemTotal through HardwareCorrupted -- none matches AnonHugePages]
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
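Each of these scans is the same loop unrolled by xtrace; the backslash-escaped right-hand sides are just how set -x prints the match pattern so it reads back as a literal. A condensed, runnable rendering of the scan -- same function name as the trace, but simplified to read /proc/meminfo directly rather than the pre-snapshotted mem array:

get_meminfo() {
    local get=$1 var val _
    # Split "Key: value [kB]" on ': ' and return the first matching value.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < /proc/meminfo
    return 1
}

get_meminfo HugePages_Surp   # prints 0 on this host, per the snapshots

The real helper scans the array captured by mapfile instead of the live file, which keeps every field of one reading mutually consistent while the loop runs.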
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.129 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108632056 kB' 'MemAvailable: 112363604 kB' 'Buffers: 4132 kB' 'Cached: 10636500 kB' 'SwapCached: 0 kB' 'Active: 7607064 kB' 'Inactive: 3701232 kB' 'Active(anon): 7115632 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 670880 kB' 'Mapped: 183012 kB' 'Shmem: 6447968 kB' 'KReclaimable: 579828 kB' 'Slab: 1455712 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875884 kB' 'KernelStack: 27824 kB' 'PageTables: 9196 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8719196 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237872 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace: the setup/common.sh@31-32 read/continue pair repeats for every key from MemTotal through HugePages_Rsvd -- none matches HugePages_Surp]
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
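The verifier collects surp and resv because both counters describe pages in flight: surplus pages were allocated beyond the static pool, and reserved pages are promised to mappings but not yet faulted in, so both have to be netted out before the count is compared against the expected pool. A quick way to pull the same counters outside the harness; plain awk, nothing SPDK-specific, and read_field is a hypothetical helper name:

read_field() { awk -v k="$1:" '$1 == k { print $2 }' /proc/meminfo; }

surp=$(read_field HugePages_Surp)   # pages allocated beyond the static pool
resv=$(read_field HugePages_Rsvd)   # pages reserved but not yet faulted in
free=$(read_field HugePages_Free)
echo "free=$free reserved=$resv surplus=$surp available=$((free - resv))"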
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.130 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.131 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.131 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108632772 kB' 'MemAvailable: 112364320 kB' 'Buffers: 4132 kB' 'Cached: 10636520 kB' 'SwapCached: 0 kB' 'Active: 7607400 kB' 'Inactive: 3701232 kB' 'Active(anon): 7115968 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671248 kB' 'Mapped: 183012 kB' 'Shmem: 6447988 kB' 'KReclaimable: 579828 kB' 'Slab: 1455704 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875876 kB' 'KernelStack: 27856 kB' 'PageTables: 9312 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8720824 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237872 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace: the setup/common.sh@31-32 read/continue pair repeats for every key from MemTotal through KernelStack -- still scanning for HugePages_Rsvd]
00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue
continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:15.132 nr_hugepages=1024 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:15.132 resv_hugepages=0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:15.132 surplus_hugepages=0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:15.132 anon_hugepages=0 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local 
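The get_meminfo trace above reduces to one pattern: split each /proc/meminfo line on ': ' into a key and a value, and return the value of the first key that matches. A minimal bash sketch of that lookup (an illustrative re-implementation with a hypothetical function name, not the verbatim setup/common.sh source, which additionally snapshots the file with mapfile and handles per-node files):

    get_meminfo_value() {
        local get=$1 var val _
        # A line looks like "MemTotal: 126338848 kB" or "HugePages_Rsvd: 0";
        # IFS=': ' splits it into key, value, and an optional trailing unit.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < /proc/meminfo
        return 1
    }

    resv=$(get_meminfo_value HugePages_Rsvd) # evaluates to 0 in the run above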
00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:15.132 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.133 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 126338848 kB' 'MemFree: 108633636 kB' 'MemAvailable: 112365184 kB' 'Buffers: 4132 kB' 'Cached: 10636540 kB' 'SwapCached: 0 kB' 'Active: 7607568 kB' 'Inactive: 3701232 kB' 'Active(anon): 7116136 kB' 'Inactive(anon): 0 kB' 'Active(file): 491432 kB' 'Inactive(file): 3701232 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 671360 kB' 'Mapped: 183012 kB' 'Shmem: 6448008 kB' 'KReclaimable: 579828 kB' 'Slab: 1455704 kB' 'SReclaimable: 579828 kB' 'SUnreclaim: 875876 kB' 'KernelStack: 27808 kB' 'PageTables: 9152 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 70509452 kB' 'Committed_AS: 8722336 kB' 'VmallocTotal: 13743895347199 kB' 'VmallocUsed: 237856 kB' 'VmallocChunk: 0 kB' 'Percpu: 145152 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 4121972 kB' 'DirectMap2M: 57423872 kB' 'DirectMap1G: 74448896 kB'
[xtrace condensed: the same setup/common.sh@31-32 read/compare loop repeats for every key in the snapshot above, this time against \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l, until the match:]
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
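get_nodes above walks /sys/devices/system/node/node+([0-9]) (an extglob pattern, hence the shopt below) and records one hugepage count per NUMA node, here node0=1024 and node1=0. A sketch of that enumeration; reading the per-node 2048 kB pool from sysfs is an assumption, since the xtrace records only the resulting assignments:

    shopt -s extglob nullglob
    nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        # ".../node0" -> array index 0
        nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
    done
    no_nodes=${#nodes_sys[@]} # 2 on this machine
    echo "nodes: ${!nodes_sys[*]} -> hugepages: ${nodes_sys[*]}"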
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:15.134 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 65659008 kB' 'MemFree: 59450588 kB' 'MemUsed: 6208420 kB' 'SwapCached: 0 kB' 'Active: 1532212 kB' 'Inactive: 288480 kB' 'Active(anon): 1374464 kB' 'Inactive(anon): 0 kB' 'Active(file): 157748 kB' 'Inactive(file): 288480 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1667384 kB' 'Mapped: 58884 kB' 'AnonPages: 156476 kB' 'Shmem: 1221156 kB' 'KernelStack: 12600 kB' 'PageTables: 3488 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 324884 kB' 'Slab: 745352 kB' 'SReclaimable: 324884 kB' 'SUnreclaim: 420468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[xtrace condensed: the setup/common.sh@31-32 read/compare loop repeats for every node0 key in the snapshot above, against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, until the match:]
00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
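The per-node HugePages_Surp query above is the same reader pointed at a different file: with node=0, mem_f switches to /sys/devices/system/node/node0/meminfo, whose lines carry a "Node 0 " prefix that is stripped before parsing. That stripping expansion isolated as a sketch (extglob is needed for the +([0-9]) pattern):

    shopt -s extglob
    node=0
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # "Node 0 HugePages_Surp: 0" -> "HugePages_Surp: 0"; a no-op for /proc/meminfo lines
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]}" | grep '^HugePages'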
00:04:15.136 node0=1024 expecting 1024 00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:15.136 00:04:15.136 real 0m7.987s 00:04:15.136 user 0m3.118s 00:04:15.136 sys 0m5.012s 00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.136 20:18:06 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 ************************************ 00:04:15.136 END TEST no_shrink_alloc 00:04:15.136 ************************************ 00:04:15.136 20:18:07 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:15.136 20:18:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:15.136 00:04:15.136 real 0m28.522s 00:04:15.136 user 0m11.138s 00:04:15.136 sys 0m17.719s 00:04:15.136 20:18:07 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:15.136 20:18:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 ************************************ 00:04:15.136 END TEST hugepages 00:04:15.136 ************************************ 00:04:15.136 20:18:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:15.136 20:18:07 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.136 20:18:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:15.136 20:18:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:15.136 20:18:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:15.136 ************************************ 00:04:15.136 START TEST driver 00:04:15.136 ************************************ 00:04:15.136 20:18:07 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:15.136 * Looking for test storage... 
00:04:15.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:15.136 20:18:07 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:15.136 20:18:07 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:15.136 20:18:07 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:20.498 20:18:12 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:20.498 20:18:12 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:20.498 20:18:12 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:20.498 20:18:12 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:20.498 ************************************ 00:04:20.498 START TEST guess_driver 00:04:20.498 ************************************ 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 370 > 0 )) 00:04:20.498 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:20.499 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:20.499 20:18:12 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
00:04:20.499 Looking for driver=vfio-pci
00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:20.499 20:18:12 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: the setup/driver.sh@57/@58/@61 read/compare cycle repeats at 20:18:16 (elapsed 00:04:23.798 to 00:04:24.058) for every "->" marker line in the config output; each one reports vfio-pci, so no failure is recorded]
00:04:24.058 20:18:16 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:24.058 20:18:16 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:24.058 20:18:16 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:24.058 20:18:16 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:29.346
00:04:29.346 real 0m9.165s
00:04:29.346 user 0m3.064s
00:04:29.346 sys 0m5.373s
00:04:29.346 20:18:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:29.346 20:18:21 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x
00:04:29.346 ************************************
00:04:29.346 END TEST guess_driver
00:04:29.346 ************************************
00:04:29.346 20:18:21 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:04:29.346
00:04:29.346 real 0m14.478s
00:04:29.346 user 0m4.664s
00:04:29.346 sys 0m8.337s
00:04:29.346 20:18:21
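For readers following the trace: guess_driver picked vfio-pci because IOMMU groups were present (370 of them) and modprobe could resolve the module chain, then re-read the "setup.sh config" listing and checked every "->" marker line against the chosen driver. A hedged sketch of both steps under those assumptions; $rootdir stands in for the SPDK checkout, and the exact driver.sh logic differs slightly (it also honors unsafe no-IOMMU mode):

    shopt -s nullglob
    # Step 1: vfio-pci is viable if IOMMU groups exist and the module resolves.
    iommu_groups=(/sys/kernel/iommu_groups/*)
    driver=uio_pci_generic
    if ((${#iommu_groups[@]} > 0)) &&
        modprobe --show-depends vfio_pci 2>/dev/null | grep -q '\.ko'; then
        driver=vfio-pci
    fi
    # Step 2: every "->" marker line in the config listing must name $driver.
    fail=0
    while read -r _ _ _ _ marker setup_driver; do
        [[ $marker == '->' ]] || continue
        [[ $setup_driver == "$driver" ]] || fail=1
    done < <("$rootdir/scripts/setup.sh" config)
    ((fail == 0)) && echo "all devices bound to $driver"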
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:29.346 20:18:21 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:29.346 ************************************ 00:04:29.346 END TEST driver 00:04:29.346 ************************************ 00:04:29.346 20:18:21 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:29.346 20:18:21 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.346 20:18:21 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:29.346 20:18:21 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:29.346 20:18:21 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:29.346 ************************************ 00:04:29.346 START TEST devices 00:04:29.346 ************************************ 00:04:29.346 20:18:21 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:29.607 * Looking for test storage... 00:04:29.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:29.607 20:18:21 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:29.607 20:18:21 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:29.607 20:18:21 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:29.607 20:18:21 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:65:00.0 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\6\5\:\0\0\.\0* ]] 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:04:33.817 
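For readers following the trace: devices.sh accepts nvme0n1 as the test disk because it is not zoned, spdk-gpt.py and blkid find no partition table, and its capacity (1920383410176 bytes) clears min_disk_size. A rough bash equivalent of that probe, assuming standard sysfs and blkid behavior rather than SPDK's exact helpers:

    min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace
    block=nvme0n1
    if [[ $(cat "/sys/block/$block/queue/zoned") == none ]] &&
        [[ -z $(blkid -s PTTYPE -o value "/dev/$block") ]]; then
        size=$(($(cat "/sys/block/$block/size") * 512))   # 512-byte sectors -> bytes
        ((size >= min_disk_size)) && echo "/dev/$block usable ($size bytes)"
    fi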
20:18:25 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:04:33.817 No valid GPT data, bailing 00:04:33.817 20:18:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:04:33.817 20:18:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:04:33.817 20:18:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1920383410176 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1920383410176 >= min_disk_size )) 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:65:00.0 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.817 20:18:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:33.817 ************************************ 00:04:33.817 START TEST nvme_mount 00:04:33.817 ************************************ 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:33.817 20:18:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:04:34.761 Creating new GPT entries in memory. 00:04:34.761 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:34.761 other utilities. 00:04:34.761 20:18:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:34.761 20:18:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:34.761 20:18:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:34.761 20:18:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:34.761 20:18:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:35.706 Creating new GPT entries in memory. 00:04:35.706 The operation has completed successfully. 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 1086045 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:04:35.706 20:18:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:65:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:35.706 20:18:28 
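For readers following the trace: nvme_mount wipes the disk label, creates a single 1 GiB partition (sectors 2048 to 2099199), formats it ext4 and mounts it. A condensed sketch of that sequence; the flock mirrors the trace's guard against racing sgdisk invocations:

    disk=/dev/nvme0n1
    mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
    sgdisk "$disk" --zap-all                           # destroy old GPT/MBR data
    flock "$disk" sgdisk "$disk" --new=1:2048:2099199  # one 1 GiB partition
    mkdir -p "$mnt"
    mkfs.ext4 -qF "${disk}p1"
    mount "${disk}p1" "$mnt"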
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:35.706 20:18:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: setup/devices.sh@60/@62 compares each reported PCI address (0000:80:01.0-7, 0000:00:01.0-7) against 0000:65:00.0 at 20:18:31; only 0000:65:00.0 matches]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1
00:04:39.918 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:39.918 20:18:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:39.918 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
00:04:39.918 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54
00:04:39.918 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
00:04:39.918 /dev/nvme0n1: calling ioctl to re-read partition table: Success
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]]
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:65:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # :
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:39.918 20:18:32 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: the same @60/@62 PCI comparison cycle repeats at 20:18:35 (elapsed 00:04:44.129); Active devices: mount@nvme0n1:nvme0n1 matches, so found=1 again]
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]]
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]]
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:65:00.0 data@nvme0n1 '' ''
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:44.129 20:18:35 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: the @60/@62 PCI comparison cycle repeats at 20:18:39 (elapsed 00:04:47.433); Active devices: data@nvme0n1 matches, so found=1]
00:04:47.433 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]]
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1
00:04:47.434 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:47.434
00:04:47.434 real 0m13.818s
00:04:47.434 user 0m4.472s
00:04:47.434 sys 0m7.262s
00:04:47.434 20:18:39 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:47.434 20:18:39
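For readers following the trace: cleanup_nvme unmounts the test mount point if needed, then wipefs scrubs the ext4 superblock and the primary GPT, backup GPT and protective MBR signatures, which is exactly what the erased-bytes messages above report. An illustrative sketch of that teardown (function shape assumed, mirroring devices.sh@20-28 as traced):

    cleanup_nvme() {
        local mnt=$1 disk=$2
        mountpoint -q "$mnt" && umount "$mnt"    # only unmount if still mounted
        [[ -b ${disk}p1 ]] && wipefs --all "${disk}p1"
        [[ -b $disk ]] && wipefs --all "$disk"   # clears ext4, GPT and PMBR magic
    }

    cleanup_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /dev/nvme0n1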
setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:04:47.434 ************************************ 00:04:47.434 END TEST nvme_mount 00:04:47.434 ************************************ 00:04:47.434 20:18:39 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:47.434 20:18:39 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:47.434 20:18:39 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.434 20:18:39 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.434 20:18:39 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:47.434 ************************************ 00:04:47.434 START TEST dm_mount 00:04:47.434 ************************************ 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:47.434 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:47.696 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:47.696 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:47.696 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:47.696 20:18:39 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:48.639 Creating new GPT entries in memory. 00:04:48.639 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:48.639 other utilities. 00:04:48.639 20:18:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:48.639 20:18:40 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:48.639 20:18:40 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:48.639 20:18:40 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:48.639 20:18:40 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:49.580 Creating new GPT entries in memory. 00:04:49.580 The operation has completed successfully. 00:04:49.580 20:18:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:49.580 20:18:41 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:49.580 20:18:41 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:49.580 20:18:41 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:49.580 20:18:41 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:50.523 The operation has completed successfully. 00:04:50.523 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:50.523 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:50.523 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 1091595 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-1 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-1 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-1 ]] 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-1 ]] 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- 
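For readers following the trace: dm_mount builds a device-mapper target named nvme_dm_test on top of the two 1 GiB partitions and resolves it to /dev/dm-1. The trace only shows "dmsetup create nvme_dm_test"; the linear table below is an assumption consistent with two 2097152-sector partitions, not SPDK's verbatim command:

    # Concatenate nvme0n1p1 and nvme0n1p2 into one linear dm device (sketch).
    printf '%s\n' '0 2097152 linear /dev/nvme0n1p1 0' \
                  '2097152 2097152 linear /dev/nvme0n1p2 0' \
        | dmsetup create nvme_dm_test
    readlink -f /dev/mapper/nvme_dm_test   # resolves to /dev/dm-1 in this run
    mkfs.ext4 -qF /dev/mapper/nvme_dm_test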
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:65:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # :
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.784 20:18:42 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: the @60/@62 PCI comparison cycle repeats at 20:18:46 (elapsed 00:04:54.979); Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1,mount@nvme0n1:nvme_dm_test matches nvme0n1:nvme_dm_test, so found=1]
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]]
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]]
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:65:00.0 holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 '' ''
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:65:00.0
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]]
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:65:00.0
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.979 20:18:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[xtrace trimmed: the @60/@62 PCI comparison cycle repeats at 20:18:50 (elapsed 00:04:58.277); Active devices: holder@nvme0n1p1:dm-1,holder@nvme0n1p2:dm-1 matches, so found=1]
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 ))
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]]
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]]
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]]
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1
00:04:58.277 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]]
00:04:58.277 20:18:50 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2
00:04:58.537
00:04:58.537 real 0m10.853s
00:04:58.537 user 0m2.862s
00:04:58.537 sys 0m5.065s
00:04:58.537 20:18:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:58.537 20:18:50 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x
00:04:58.537 ************************************
00:04:58.537 END TEST dm_mount
00:04:58.537 ************************************
00:04:58.537 20:18:50 setup.sh.devices -- common/autotest_common.sh@1142 -- 
return 0 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.537 20:18:50 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:58.797 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:58.797 /dev/nvme0n1: 8 bytes were erased at offset 0x1bf1fc55e00 (gpt): 45 46 49 20 50 41 52 54 00:04:58.797 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:58.797 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:58.797 20:18:50 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:58.797 00:04:58.797 real 0m29.322s 00:04:58.797 user 0m8.937s 00:04:58.797 sys 0m15.231s 00:04:58.797 20:18:50 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.798 20:18:50 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:58.798 ************************************ 00:04:58.798 END TEST devices 00:04:58.798 ************************************ 00:04:58.798 20:18:51 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:58.798 00:04:58.798 real 1m39.563s 00:04:58.798 user 0m33.754s 00:04:58.798 sys 0m57.341s 00:04:58.798 20:18:51 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:58.798 20:18:51 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:58.798 ************************************ 00:04:58.798 END TEST setup.sh 00:04:58.798 ************************************ 00:04:58.798 20:18:51 -- common/autotest_common.sh@1142 -- # return 0 00:04:58.798 20:18:51 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:03.000 Hugepages 00:05:03.001 node hugesize free / total 00:05:03.001 node0 1048576kB 0 / 0 00:05:03.001 node0 2048kB 2048 / 2048 00:05:03.001 node1 1048576kB 0 / 0 00:05:03.001 node1 2048kB 0 / 0 00:05:03.001 00:05:03.001 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:03.001 I/OAT 0000:00:01.0 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.1 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.2 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.3 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.4 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.5 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.6 8086 0b00 0 ioatdma - - 00:05:03.001 I/OAT 0000:00:01.7 8086 0b00 0 ioatdma - - 00:05:03.001 NVMe 
0000:65:00.0 144d a80a 0 nvme nvme0 nvme0n1 00:05:03.001 I/OAT 0000:80:01.0 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.1 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.2 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.3 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.4 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.5 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.6 8086 0b00 1 ioatdma - - 00:05:03.001 I/OAT 0000:80:01.7 8086 0b00 1 ioatdma - - 00:05:03.001 20:18:54 -- spdk/autotest.sh@130 -- # uname -s 00:05:03.001 20:18:54 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:03.001 20:18:54 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:03.001 20:18:54 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:06.320 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:06.320 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:08.295 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:08.295 20:19:00 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:09.235 20:19:01 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:09.235 20:19:01 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:09.235 20:19:01 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:09.235 20:19:01 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:09.235 20:19:01 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:09.235 20:19:01 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:09.235 20:19:01 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:09.235 20:19:01 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:09.235 20:19:01 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:09.235 20:19:01 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:09.235 20:19:01 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:09.235 20:19:01 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:13.441 Waiting for block devices as requested 00:05:13.441 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:13.441 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:13.702 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:13.702 0000:65:00.0 (144d a80a): 
vfio-pci -> nvme 00:05:13.702 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:05:13.962 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:05:13.962 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:05:13.962 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:05:13.962 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:05:14.223 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:05:14.223 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:05:14.223 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:05:14.223 20:19:06 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:14.223 20:19:06 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:65:00.0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1502 -- # grep 0000:65:00.0/nvme/nvme 00:05:14.223 20:19:06 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 ]] 00:05:14.223 20:19:06 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:64/0000:64:02.0/0000:65:00.0/nvme/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:14.223 20:19:06 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:14.223 20:19:06 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:14.223 20:19:06 -- common/autotest_common.sh@1545 -- # oacs=' 0x5f' 00:05:14.223 20:19:06 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:14.223 20:19:06 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:14.223 20:19:06 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:14.223 20:19:06 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:14.223 20:19:06 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:14.223 20:19:06 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:14.223 20:19:06 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:14.223 20:19:06 -- common/autotest_common.sh@1557 -- # continue 00:05:14.223 20:19:06 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:14.223 20:19:06 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:14.223 20:19:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.483 20:19:06 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:14.483 20:19:06 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:14.483 20:19:06 -- common/autotest_common.sh@10 -- # set +x 00:05:14.483 20:19:06 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:18.684 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 
00:05:18.684 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:05:18.684 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:05:18.684 20:19:10 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:18.684 20:19:10 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:18.684 20:19:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 20:19:10 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:18.684 20:19:10 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:18.684 20:19:10 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:18.684 20:19:10 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:18.684 20:19:10 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:18.684 20:19:10 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:18.684 20:19:10 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:18.684 20:19:10 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:18.684 20:19:10 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:18.684 20:19:10 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:18.684 20:19:10 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:18.684 20:19:10 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:18.684 20:19:10 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:05:18.684 20:19:10 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:18.684 20:19:10 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:65:00.0/device 00:05:18.684 20:19:10 -- common/autotest_common.sh@1580 -- # device=0xa80a 00:05:18.684 20:19:10 -- common/autotest_common.sh@1581 -- # [[ 0xa80a == \0\x\0\a\5\4 ]] 00:05:18.684 20:19:10 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:05:18.684 20:19:10 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:05:18.684 20:19:10 -- common/autotest_common.sh@1593 -- # return 0 00:05:18.684 20:19:10 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:18.684 20:19:10 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:18.684 20:19:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.684 20:19:10 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:18.684 20:19:10 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:18.684 20:19:10 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:18.684 20:19:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 20:19:10 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:18.684 20:19:10 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.684 20:19:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.684 20:19:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.684 20:19:10 -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 ************************************ 00:05:18.684 START TEST env 00:05:18.684 ************************************ 00:05:18.684 20:19:10 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:18.684 * Looking for test storage... 
00:05:18.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:18.684 20:19:10 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.684 20:19:10 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.684 20:19:10 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.684 20:19:10 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 ************************************ 00:05:18.684 START TEST env_memory 00:05:18.684 ************************************ 00:05:18.684 20:19:10 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:18.684 00:05:18.684 00:05:18.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.684 http://cunit.sourceforge.net/ 00:05:18.684 00:05:18.684 00:05:18.684 Suite: memory 00:05:18.684 Test: alloc and free memory map ...[2024-07-15 20:19:10.856888] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:18.684 passed 00:05:18.684 Test: mem map translation ...[2024-07-15 20:19:10.882643] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:18.684 [2024-07-15 20:19:10.882680] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:18.684 [2024-07-15 20:19:10.882728] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:18.684 [2024-07-15 20:19:10.882736] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:18.684 passed 00:05:18.684 Test: mem map registration ...[2024-07-15 20:19:10.938209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:18.684 [2024-07-15 20:19:10.938241] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:18.684 passed 00:05:18.684 Test: mem map adjacent registrations ...passed 00:05:18.684 00:05:18.684 Run Summary: Type Total Ran Passed Failed Inactive 00:05:18.684 suites 1 1 n/a 0 0 00:05:18.684 tests 4 4 4 0 0 00:05:18.684 asserts 152 152 152 0 n/a 00:05:18.684 00:05:18.684 Elapsed time = 0.192 seconds 00:05:18.684 00:05:18.684 real 0m0.207s 00:05:18.684 user 0m0.192s 00:05:18.684 sys 0m0.015s 00:05:18.684 20:19:11 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.684 20:19:11 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:18.684 ************************************ 00:05:18.684 END TEST env_memory 00:05:18.684 ************************************ 00:05:18.684 20:19:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:18.684 20:19:11 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.684 20:19:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
00:05:18.684 20:19:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.684 20:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:18.946 ************************************ 00:05:18.946 START TEST env_vtophys 00:05:18.946 ************************************ 00:05:18.946 20:19:11 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:18.946 EAL: lib.eal log level changed from notice to debug 00:05:18.946 EAL: Detected lcore 0 as core 0 on socket 0 00:05:18.946 EAL: Detected lcore 1 as core 1 on socket 0 00:05:18.946 EAL: Detected lcore 2 as core 2 on socket 0 00:05:18.946 EAL: Detected lcore 3 as core 3 on socket 0 00:05:18.946 EAL: Detected lcore 4 as core 4 on socket 0 00:05:18.946 EAL: Detected lcore 5 as core 5 on socket 0 00:05:18.946 EAL: Detected lcore 6 as core 6 on socket 0 00:05:18.946 EAL: Detected lcore 7 as core 7 on socket 0 00:05:18.946 EAL: Detected lcore 8 as core 8 on socket 0 00:05:18.946 EAL: Detected lcore 9 as core 9 on socket 0 00:05:18.946 EAL: Detected lcore 10 as core 10 on socket 0 00:05:18.946 EAL: Detected lcore 11 as core 11 on socket 0 00:05:18.946 EAL: Detected lcore 12 as core 12 on socket 0 00:05:18.946 EAL: Detected lcore 13 as core 13 on socket 0 00:05:18.946 EAL: Detected lcore 14 as core 14 on socket 0 00:05:18.946 EAL: Detected lcore 15 as core 15 on socket 0 00:05:18.946 EAL: Detected lcore 16 as core 16 on socket 0 00:05:18.946 EAL: Detected lcore 17 as core 17 on socket 0 00:05:18.946 EAL: Detected lcore 18 as core 18 on socket 0 00:05:18.946 EAL: Detected lcore 19 as core 19 on socket 0 00:05:18.946 EAL: Detected lcore 20 as core 20 on socket 0 00:05:18.946 EAL: Detected lcore 21 as core 21 on socket 0 00:05:18.946 EAL: Detected lcore 22 as core 22 on socket 0 00:05:18.946 EAL: Detected lcore 23 as core 23 on socket 0 00:05:18.946 EAL: Detected lcore 24 as core 24 on socket 0 00:05:18.946 EAL: Detected lcore 25 as core 25 on socket 0 00:05:18.946 EAL: Detected lcore 26 as core 26 on socket 0 00:05:18.946 EAL: Detected lcore 27 as core 27 on socket 0 00:05:18.946 EAL: Detected lcore 28 as core 28 on socket 0 00:05:18.946 EAL: Detected lcore 29 as core 29 on socket 0 00:05:18.946 EAL: Detected lcore 30 as core 30 on socket 0 00:05:18.946 EAL: Detected lcore 31 as core 31 on socket 0 00:05:18.946 EAL: Detected lcore 32 as core 32 on socket 0 00:05:18.946 EAL: Detected lcore 33 as core 33 on socket 0 00:05:18.946 EAL: Detected lcore 34 as core 34 on socket 0 00:05:18.946 EAL: Detected lcore 35 as core 35 on socket 0 00:05:18.946 EAL: Detected lcore 36 as core 0 on socket 1 00:05:18.946 EAL: Detected lcore 37 as core 1 on socket 1 00:05:18.946 EAL: Detected lcore 38 as core 2 on socket 1 00:05:18.946 EAL: Detected lcore 39 as core 3 on socket 1 00:05:18.946 EAL: Detected lcore 40 as core 4 on socket 1 00:05:18.946 EAL: Detected lcore 41 as core 5 on socket 1 00:05:18.946 EAL: Detected lcore 42 as core 6 on socket 1 00:05:18.946 EAL: Detected lcore 43 as core 7 on socket 1 00:05:18.946 EAL: Detected lcore 44 as core 8 on socket 1 00:05:18.946 EAL: Detected lcore 45 as core 9 on socket 1 00:05:18.946 EAL: Detected lcore 46 as core 10 on socket 1 00:05:18.946 EAL: Detected lcore 47 as core 11 on socket 1 00:05:18.946 EAL: Detected lcore 48 as core 12 on socket 1 00:05:18.946 EAL: Detected lcore 49 as core 13 on socket 1 00:05:18.946 EAL: Detected lcore 50 as core 14 on socket 1 00:05:18.946 EAL: Detected lcore 51 as core 15 on socket 1 00:05:18.946 
EAL: Detected lcore 52 as core 16 on socket 1 00:05:18.946 EAL: Detected lcore 53 as core 17 on socket 1 00:05:18.946 EAL: Detected lcore 54 as core 18 on socket 1 00:05:18.946 EAL: Detected lcore 55 as core 19 on socket 1 00:05:18.946 EAL: Detected lcore 56 as core 20 on socket 1 00:05:18.946 EAL: Detected lcore 57 as core 21 on socket 1 00:05:18.946 EAL: Detected lcore 58 as core 22 on socket 1 00:05:18.946 EAL: Detected lcore 59 as core 23 on socket 1 00:05:18.946 EAL: Detected lcore 60 as core 24 on socket 1 00:05:18.946 EAL: Detected lcore 61 as core 25 on socket 1 00:05:18.946 EAL: Detected lcore 62 as core 26 on socket 1 00:05:18.946 EAL: Detected lcore 63 as core 27 on socket 1 00:05:18.946 EAL: Detected lcore 64 as core 28 on socket 1 00:05:18.946 EAL: Detected lcore 65 as core 29 on socket 1 00:05:18.946 EAL: Detected lcore 66 as core 30 on socket 1 00:05:18.946 EAL: Detected lcore 67 as core 31 on socket 1 00:05:18.946 EAL: Detected lcore 68 as core 32 on socket 1 00:05:18.946 EAL: Detected lcore 69 as core 33 on socket 1 00:05:18.946 EAL: Detected lcore 70 as core 34 on socket 1 00:05:18.946 EAL: Detected lcore 71 as core 35 on socket 1 00:05:18.946 EAL: Detected lcore 72 as core 0 on socket 0 00:05:18.946 EAL: Detected lcore 73 as core 1 on socket 0 00:05:18.946 EAL: Detected lcore 74 as core 2 on socket 0 00:05:18.946 EAL: Detected lcore 75 as core 3 on socket 0 00:05:18.946 EAL: Detected lcore 76 as core 4 on socket 0 00:05:18.946 EAL: Detected lcore 77 as core 5 on socket 0 00:05:18.946 EAL: Detected lcore 78 as core 6 on socket 0 00:05:18.946 EAL: Detected lcore 79 as core 7 on socket 0 00:05:18.946 EAL: Detected lcore 80 as core 8 on socket 0 00:05:18.946 EAL: Detected lcore 81 as core 9 on socket 0 00:05:18.946 EAL: Detected lcore 82 as core 10 on socket 0 00:05:18.946 EAL: Detected lcore 83 as core 11 on socket 0 00:05:18.946 EAL: Detected lcore 84 as core 12 on socket 0 00:05:18.946 EAL: Detected lcore 85 as core 13 on socket 0 00:05:18.946 EAL: Detected lcore 86 as core 14 on socket 0 00:05:18.946 EAL: Detected lcore 87 as core 15 on socket 0 00:05:18.946 EAL: Detected lcore 88 as core 16 on socket 0 00:05:18.946 EAL: Detected lcore 89 as core 17 on socket 0 00:05:18.946 EAL: Detected lcore 90 as core 18 on socket 0 00:05:18.946 EAL: Detected lcore 91 as core 19 on socket 0 00:05:18.946 EAL: Detected lcore 92 as core 20 on socket 0 00:05:18.946 EAL: Detected lcore 93 as core 21 on socket 0 00:05:18.946 EAL: Detected lcore 94 as core 22 on socket 0 00:05:18.946 EAL: Detected lcore 95 as core 23 on socket 0 00:05:18.946 EAL: Detected lcore 96 as core 24 on socket 0 00:05:18.946 EAL: Detected lcore 97 as core 25 on socket 0 00:05:18.946 EAL: Detected lcore 98 as core 26 on socket 0 00:05:18.946 EAL: Detected lcore 99 as core 27 on socket 0 00:05:18.946 EAL: Detected lcore 100 as core 28 on socket 0 00:05:18.946 EAL: Detected lcore 101 as core 29 on socket 0 00:05:18.946 EAL: Detected lcore 102 as core 30 on socket 0 00:05:18.946 EAL: Detected lcore 103 as core 31 on socket 0 00:05:18.946 EAL: Detected lcore 104 as core 32 on socket 0 00:05:18.946 EAL: Detected lcore 105 as core 33 on socket 0 00:05:18.946 EAL: Detected lcore 106 as core 34 on socket 0 00:05:18.946 EAL: Detected lcore 107 as core 35 on socket 0 00:05:18.946 EAL: Detected lcore 108 as core 0 on socket 1 00:05:18.946 EAL: Detected lcore 109 as core 1 on socket 1 00:05:18.946 EAL: Detected lcore 110 as core 2 on socket 1 00:05:18.946 EAL: Detected lcore 111 as core 3 on socket 1 00:05:18.946 EAL: Detected 
lcore 112 as core 4 on socket 1 00:05:18.946 EAL: Detected lcore 113 as core 5 on socket 1 00:05:18.946 EAL: Detected lcore 114 as core 6 on socket 1 00:05:18.946 EAL: Detected lcore 115 as core 7 on socket 1 00:05:18.946 EAL: Detected lcore 116 as core 8 on socket 1 00:05:18.946 EAL: Detected lcore 117 as core 9 on socket 1 00:05:18.946 EAL: Detected lcore 118 as core 10 on socket 1 00:05:18.946 EAL: Detected lcore 119 as core 11 on socket 1 00:05:18.946 EAL: Detected lcore 120 as core 12 on socket 1 00:05:18.946 EAL: Detected lcore 121 as core 13 on socket 1 00:05:18.946 EAL: Detected lcore 122 as core 14 on socket 1 00:05:18.946 EAL: Detected lcore 123 as core 15 on socket 1 00:05:18.946 EAL: Detected lcore 124 as core 16 on socket 1 00:05:18.946 EAL: Detected lcore 125 as core 17 on socket 1 00:05:18.946 EAL: Detected lcore 126 as core 18 on socket 1 00:05:18.946 EAL: Detected lcore 127 as core 19 on socket 1 00:05:18.946 EAL: Skipped lcore 128 as core 20 on socket 1 00:05:18.946 EAL: Skipped lcore 129 as core 21 on socket 1 00:05:18.946 EAL: Skipped lcore 130 as core 22 on socket 1 00:05:18.946 EAL: Skipped lcore 131 as core 23 on socket 1 00:05:18.946 EAL: Skipped lcore 132 as core 24 on socket 1 00:05:18.946 EAL: Skipped lcore 133 as core 25 on socket 1 00:05:18.946 EAL: Skipped lcore 134 as core 26 on socket 1 00:05:18.946 EAL: Skipped lcore 135 as core 27 on socket 1 00:05:18.946 EAL: Skipped lcore 136 as core 28 on socket 1 00:05:18.946 EAL: Skipped lcore 137 as core 29 on socket 1 00:05:18.946 EAL: Skipped lcore 138 as core 30 on socket 1 00:05:18.946 EAL: Skipped lcore 139 as core 31 on socket 1 00:05:18.946 EAL: Skipped lcore 140 as core 32 on socket 1 00:05:18.946 EAL: Skipped lcore 141 as core 33 on socket 1 00:05:18.946 EAL: Skipped lcore 142 as core 34 on socket 1 00:05:18.946 EAL: Skipped lcore 143 as core 35 on socket 1 00:05:18.946 EAL: Maximum logical cores by configuration: 128 00:05:18.946 EAL: Detected CPU lcores: 128 00:05:18.946 EAL: Detected NUMA nodes: 2 00:05:18.946 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:18.946 EAL: Detected shared linkage of DPDK 00:05:18.946 EAL: No shared files mode enabled, IPC will be disabled 00:05:18.947 EAL: Bus pci wants IOVA as 'DC' 00:05:18.947 EAL: Buses did not request a specific IOVA mode. 00:05:18.947 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:18.947 EAL: Selected IOVA mode 'VA' 00:05:18.947 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.947 EAL: Probing VFIO support... 00:05:18.947 EAL: IOMMU type 1 (Type 1) is supported 00:05:18.947 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:18.947 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:18.947 EAL: VFIO support initialized 00:05:18.947 EAL: Ask a virtual area of 0x2e000 bytes 00:05:18.947 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:18.947 EAL: Setting up physically contiguous memory... 
00:05:18.947 EAL: Setting maximum number of open files to 524288 00:05:18.947 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:18.947 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:18.947 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:18.947 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:18.947 EAL: Ask a virtual area of 0x61000 bytes 00:05:18.947 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:18.947 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:18.947 EAL: Ask a virtual area of 0x400000000 bytes 00:05:18.947 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:18.947 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:18.947 EAL: Hugepages will be freed exactly as allocated. 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: TSC frequency is ~2400000 KHz 00:05:18.947 EAL: Main lcore 0 is ready (tid=7f266466aa00;cpuset=[0]) 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 0 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 2MB 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:18.947 EAL: Mem event callback 'spdk:(nil)' registered 00:05:18.947 00:05:18.947 00:05:18.947 CUnit - A unit testing framework for C - Version 2.1-3 00:05:18.947 http://cunit.sourceforge.net/ 00:05:18.947 00:05:18.947 00:05:18.947 Suite: components_suite 00:05:18.947 Test: vtophys_malloc_test ...passed 00:05:18.947 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 4MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 4MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 6MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 6MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 10MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 10MB 00:05:18.947 EAL: Trying to obtain current memory policy. 
00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 18MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 18MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 34MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 34MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 66MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 66MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was expanded by 130MB 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.947 EAL: request: mp_malloc_sync 00:05:18.947 EAL: No shared files mode enabled, IPC is disabled 00:05:18.947 EAL: Heap on socket 0 was shrunk by 130MB 00:05:18.947 EAL: Trying to obtain current memory policy. 00:05:18.947 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:18.947 EAL: Restoring previous memory policy: 4 00:05:18.947 EAL: Calling mem event callback 'spdk:(nil)' 00:05:18.948 EAL: request: mp_malloc_sync 00:05:18.948 EAL: No shared files mode enabled, IPC is disabled 00:05:18.948 EAL: Heap on socket 0 was expanded by 258MB 00:05:18.948 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.208 EAL: request: mp_malloc_sync 00:05:19.208 EAL: No shared files mode enabled, IPC is disabled 00:05:19.208 EAL: Heap on socket 0 was shrunk by 258MB 00:05:19.208 EAL: Trying to obtain current memory policy. 
00:05:19.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.208 EAL: Restoring previous memory policy: 4 00:05:19.208 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.208 EAL: request: mp_malloc_sync 00:05:19.208 EAL: No shared files mode enabled, IPC is disabled 00:05:19.208 EAL: Heap on socket 0 was expanded by 514MB 00:05:19.208 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.208 EAL: request: mp_malloc_sync 00:05:19.208 EAL: No shared files mode enabled, IPC is disabled 00:05:19.208 EAL: Heap on socket 0 was shrunk by 514MB 00:05:19.208 EAL: Trying to obtain current memory policy. 00:05:19.208 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:19.468 EAL: Restoring previous memory policy: 4 00:05:19.468 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.468 EAL: request: mp_malloc_sync 00:05:19.468 EAL: No shared files mode enabled, IPC is disabled 00:05:19.468 EAL: Heap on socket 0 was expanded by 1026MB 00:05:19.468 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.729 EAL: request: mp_malloc_sync 00:05:19.729 EAL: No shared files mode enabled, IPC is disabled 00:05:19.729 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:19.729 passed 00:05:19.729 00:05:19.729 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.729 suites 1 1 n/a 0 0 00:05:19.729 tests 2 2 2 0 0 00:05:19.729 asserts 497 497 497 0 n/a 00:05:19.729 00:05:19.729 Elapsed time = 0.645 seconds 00:05:19.729 EAL: Calling mem event callback 'spdk:(nil)' 00:05:19.729 EAL: request: mp_malloc_sync 00:05:19.729 EAL: No shared files mode enabled, IPC is disabled 00:05:19.729 EAL: Heap on socket 0 was shrunk by 2MB 00:05:19.729 EAL: No shared files mode enabled, IPC is disabled 00:05:19.729 EAL: No shared files mode enabled, IPC is disabled 00:05:19.729 EAL: No shared files mode enabled, IPC is disabled 00:05:19.729 00:05:19.729 real 0m0.772s 00:05:19.729 user 0m0.402s 00:05:19.729 sys 0m0.344s 00:05:19.729 20:19:11 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.729 20:19:11 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:19.729 ************************************ 00:05:19.729 END TEST env_vtophys 00:05:19.729 ************************************ 00:05:19.729 20:19:11 env -- common/autotest_common.sh@1142 -- # return 0 00:05:19.729 20:19:11 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.729 20:19:11 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:19.729 20:19:11 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.729 20:19:11 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.729 ************************************ 00:05:19.729 START TEST env_pci 00:05:19.729 ************************************ 00:05:19.729 20:19:11 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:19.729 00:05:19.729 00:05:19.729 CUnit - A unit testing framework for C - Version 2.1-3 00:05:19.729 http://cunit.sourceforge.net/ 00:05:19.729 00:05:19.729 00:05:19.729 Suite: pci 00:05:19.729 Test: pci_hook ...[2024-07-15 20:19:11.963455] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1103711 has claimed it 00:05:19.729 EAL: Cannot find device (10000:00:01.0) 00:05:19.729 EAL: Failed to attach device on primary process 00:05:19.729 passed 00:05:19.729 
00:05:19.729 Run Summary: Type Total Ran Passed Failed Inactive 00:05:19.729 suites 1 1 n/a 0 0 00:05:19.729 tests 1 1 1 0 0 00:05:19.729 asserts 25 25 25 0 n/a 00:05:19.729 00:05:19.729 Elapsed time = 0.032 seconds 00:05:19.729 00:05:19.729 real 0m0.052s 00:05:19.729 user 0m0.015s 00:05:19.729 sys 0m0.037s 00:05:19.729 20:19:11 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.729 20:19:11 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:19.729 ************************************ 00:05:19.729 END TEST env_pci 00:05:19.729 ************************************ 00:05:19.729 20:19:12 env -- common/autotest_common.sh@1142 -- # return 0 00:05:19.729 20:19:12 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:19.729 20:19:12 env -- env/env.sh@15 -- # uname 00:05:19.729 20:19:12 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:19.729 20:19:12 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:19.729 20:19:12 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.729 20:19:12 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:19.729 20:19:12 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.729 20:19:12 env -- common/autotest_common.sh@10 -- # set +x 00:05:19.729 ************************************ 00:05:19.729 START TEST env_dpdk_post_init 00:05:19.729 ************************************ 00:05:19.729 20:19:12 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:19.990 EAL: Detected CPU lcores: 128 00:05:19.990 EAL: Detected NUMA nodes: 2 00:05:19.990 EAL: Detected shared linkage of DPDK 00:05:19.990 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:19.990 EAL: Selected IOVA mode 'VA' 00:05:19.990 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.990 EAL: VFIO support initialized 00:05:19.990 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:19.990 EAL: Using IOMMU type 1 (Type 1) 00:05:19.990 EAL: Ignore mapping IO port bar(1) 00:05:20.250 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.0 (socket 0) 00:05:20.250 EAL: Ignore mapping IO port bar(1) 00:05:20.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.1 (socket 0) 00:05:20.510 EAL: Ignore mapping IO port bar(1) 00:05:20.510 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.2 (socket 0) 00:05:20.771 EAL: Ignore mapping IO port bar(1) 00:05:20.771 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.3 (socket 0) 00:05:21.032 EAL: Ignore mapping IO port bar(1) 00:05:21.032 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.4 (socket 0) 00:05:21.291 EAL: Ignore mapping IO port bar(1) 00:05:21.291 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.5 (socket 0) 00:05:21.291 EAL: Ignore mapping IO port bar(1) 00:05:21.551 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.6 (socket 0) 00:05:21.551 EAL: Ignore mapping IO port bar(1) 00:05:21.811 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:00:01.7 (socket 0) 00:05:22.072 EAL: Probe PCI driver: spdk_nvme (144d:a80a) device: 0000:65:00.0 (socket 0) 00:05:22.072 EAL: Ignore mapping IO port bar(1) 00:05:22.072 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.0 (socket 1) 
00:05:22.331 EAL: Ignore mapping IO port bar(1) 00:05:22.331 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.1 (socket 1) 00:05:22.591 EAL: Ignore mapping IO port bar(1) 00:05:22.591 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.2 (socket 1) 00:05:22.850 EAL: Ignore mapping IO port bar(1) 00:05:22.850 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.3 (socket 1) 00:05:23.110 EAL: Ignore mapping IO port bar(1) 00:05:23.110 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.4 (socket 1) 00:05:23.110 EAL: Ignore mapping IO port bar(1) 00:05:23.371 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.5 (socket 1) 00:05:23.371 EAL: Ignore mapping IO port bar(1) 00:05:23.631 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.6 (socket 1) 00:05:23.631 EAL: Ignore mapping IO port bar(1) 00:05:23.890 EAL: Probe PCI driver: spdk_ioat (8086:0b00) device: 0000:80:01.7 (socket 1) 00:05:23.890 EAL: Releasing PCI mapped resource for 0000:65:00.0 00:05:23.890 EAL: Calling pci_unmap_resource for 0000:65:00.0 at 0x202001020000 00:05:23.890 Starting DPDK initialization... 00:05:23.890 Starting SPDK post initialization... 00:05:23.890 SPDK NVMe probe 00:05:23.890 Attaching to 0000:65:00.0 00:05:23.890 Attached to 0000:65:00.0 00:05:23.890 Cleaning up... 00:05:25.799 00:05:25.799 real 0m5.727s 00:05:25.799 user 0m0.183s 00:05:25.799 sys 0m0.086s 00:05:25.799 20:19:17 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.799 20:19:17 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:25.799 ************************************ 00:05:25.799 END TEST env_dpdk_post_init 00:05:25.799 ************************************ 00:05:25.799 20:19:17 env -- common/autotest_common.sh@1142 -- # return 0 00:05:25.799 20:19:17 env -- env/env.sh@26 -- # uname 00:05:25.799 20:19:17 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:25.799 20:19:17 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.799 20:19:17 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.799 20:19:17 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.799 20:19:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.799 ************************************ 00:05:25.799 START TEST env_mem_callbacks 00:05:25.799 ************************************ 00:05:25.799 20:19:17 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:25.799 EAL: Detected CPU lcores: 128 00:05:25.799 EAL: Detected NUMA nodes: 2 00:05:25.799 EAL: Detected shared linkage of DPDK 00:05:25.799 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:25.799 EAL: Selected IOVA mode 'VA' 00:05:25.799 EAL: No free 2048 kB hugepages reported on node 1 00:05:25.799 EAL: VFIO support initialized 00:05:25.799 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:25.799 00:05:25.799 00:05:25.799 CUnit - A unit testing framework for C - Version 2.1-3 00:05:25.799 http://cunit.sourceforge.net/ 00:05:25.799 00:05:25.799 00:05:25.799 Suite: memory 00:05:25.799 Test: test ... 
00:05:25.799 register 0x200000200000 2097152 00:05:25.799 malloc 3145728 00:05:25.799 register 0x200000400000 4194304 00:05:25.799 buf 0x200000500000 len 3145728 PASSED 00:05:25.799 malloc 64 00:05:25.799 buf 0x2000004fff40 len 64 PASSED 00:05:25.799 malloc 4194304 00:05:25.799 register 0x200000800000 6291456 00:05:25.799 buf 0x200000a00000 len 4194304 PASSED 00:05:25.799 free 0x200000500000 3145728 00:05:25.799 free 0x2000004fff40 64 00:05:25.799 unregister 0x200000400000 4194304 PASSED 00:05:25.799 free 0x200000a00000 4194304 00:05:25.799 unregister 0x200000800000 6291456 PASSED 00:05:25.799 malloc 8388608 00:05:25.799 register 0x200000400000 10485760 00:05:25.799 buf 0x200000600000 len 8388608 PASSED 00:05:25.799 free 0x200000600000 8388608 00:05:25.799 unregister 0x200000400000 10485760 PASSED 00:05:25.799 passed 00:05:25.799 00:05:25.799 Run Summary: Type Total Ran Passed Failed Inactive 00:05:25.799 suites 1 1 n/a 0 0 00:05:25.799 tests 1 1 1 0 0 00:05:25.799 asserts 15 15 15 0 n/a 00:05:25.799 00:05:25.799 Elapsed time = 0.005 seconds 00:05:25.799 00:05:25.799 real 0m0.062s 00:05:25.799 user 0m0.025s 00:05:25.799 sys 0m0.037s 00:05:25.799 20:19:17 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.799 20:19:17 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:25.799 ************************************ 00:05:25.799 END TEST env_mem_callbacks 00:05:25.799 ************************************ 00:05:25.799 20:19:17 env -- common/autotest_common.sh@1142 -- # return 0 00:05:25.799 00:05:25.799 real 0m7.327s 00:05:25.799 user 0m1.023s 00:05:25.799 sys 0m0.844s 00:05:25.799 20:19:17 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.799 20:19:17 env -- common/autotest_common.sh@10 -- # set +x 00:05:25.799 ************************************ 00:05:25.799 END TEST env 00:05:25.799 ************************************ 00:05:25.799 20:19:18 -- common/autotest_common.sh@1142 -- # return 0 00:05:25.799 20:19:18 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.799 20:19:18 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:25.799 20:19:18 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.799 20:19:18 -- common/autotest_common.sh@10 -- # set +x 00:05:25.799 ************************************ 00:05:25.799 START TEST rpc 00:05:25.799 ************************************ 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:25.799 * Looking for test storage... 00:05:25.799 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:25.799 20:19:18 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1105156 00:05:25.799 20:19:18 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:25.799 20:19:18 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:25.799 20:19:18 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1105156 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@829 -- # '[' -z 1105156 ']' 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:25.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:25.799 20:19:18 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.058 [2024-07-15 20:19:18.227348] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:26.058 [2024-07-15 20:19:18.227413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105156 ] 00:05:26.058 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.058 [2024-07-15 20:19:18.300747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.058 [2024-07-15 20:19:18.374079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:26.058 [2024-07-15 20:19:18.374123] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1105156' to capture a snapshot of events at runtime. 00:05:26.058 [2024-07-15 20:19:18.374131] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:26.058 [2024-07-15 20:19:18.374137] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:26.058 [2024-07-15 20:19:18.374143] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1105156 for offline analysis/debug. 00:05:26.058 [2024-07-15 20:19:18.374164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.628 20:19:19 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:26.628 20:19:19 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:26.628 20:19:19 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.628 20:19:19 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:26.628 20:19:19 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:26.628 20:19:19 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:26.628 20:19:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.628 20:19:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.628 20:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 ************************************ 00:05:26.888 START TEST rpc_integrity 00:05:26.888 ************************************ 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:26.888 { 00:05:26.888 "name": "Malloc0", 00:05:26.888 "aliases": [ 00:05:26.888 "ccd99a77-604e-47df-95ae-69e9640376bc" 00:05:26.888 ], 00:05:26.888 "product_name": "Malloc disk", 00:05:26.888 "block_size": 512, 00:05:26.888 "num_blocks": 16384, 00:05:26.888 "uuid": "ccd99a77-604e-47df-95ae-69e9640376bc", 00:05:26.888 "assigned_rate_limits": { 00:05:26.888 "rw_ios_per_sec": 0, 00:05:26.888 "rw_mbytes_per_sec": 0, 00:05:26.888 "r_mbytes_per_sec": 0, 00:05:26.888 "w_mbytes_per_sec": 0 00:05:26.888 }, 00:05:26.888 "claimed": false, 00:05:26.888 "zoned": false, 00:05:26.888 "supported_io_types": { 00:05:26.888 "read": true, 00:05:26.888 "write": true, 00:05:26.888 "unmap": true, 00:05:26.888 "flush": true, 00:05:26.888 "reset": true, 00:05:26.888 "nvme_admin": false, 00:05:26.888 "nvme_io": false, 00:05:26.888 "nvme_io_md": false, 00:05:26.888 "write_zeroes": true, 00:05:26.888 "zcopy": true, 00:05:26.888 "get_zone_info": false, 00:05:26.888 "zone_management": false, 00:05:26.888 "zone_append": false, 00:05:26.888 "compare": false, 00:05:26.888 "compare_and_write": false, 00:05:26.888 "abort": true, 00:05:26.888 "seek_hole": false, 00:05:26.888 "seek_data": false, 00:05:26.888 "copy": true, 00:05:26.888 "nvme_iov_md": false 00:05:26.888 }, 00:05:26.888 "memory_domains": [ 00:05:26.888 { 00:05:26.888 "dma_device_id": "system", 00:05:26.888 "dma_device_type": 1 00:05:26.888 }, 00:05:26.888 { 00:05:26.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.888 "dma_device_type": 2 00:05:26.888 } 00:05:26.888 ], 00:05:26.888 "driver_specific": {} 00:05:26.888 } 00:05:26.888 ]' 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 [2024-07-15 20:19:19.182174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:26.888 [2024-07-15 20:19:19.182207] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:26.888 [2024-07-15 20:19:19.182221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2066a10 00:05:26.888 [2024-07-15 20:19:19.182228] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:26.888 
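For reference, the rpc_integrity flow traced above reduces to a handful of direct scripts/rpc.py calls against the running target. A minimal sketch, assuming spdk_tgt is already listening on the default /var/tmp/spdk.sock and that $SPDK_DIR points at the checkout (both shorthands are illustrative, not from the test itself):

    rpc="$SPDK_DIR/scripts/rpc.py"
    [ "$("$rpc" bdev_get_bdevs | jq length)" -eq 0 ]        # starts with no bdevs
    malloc=$("$rpc" bdev_malloc_create 8 512)               # 8 MiB bdev, 512 B blocks; prints the new name
    "$rpc" bdev_passthru_create -b "$malloc" -p Passthru0   # passthru vbdev claims the malloc bdev
    [ "$("$rpc" bdev_get_bdevs | jq length)" -eq 2 ]        # base bdev plus passthru
    "$rpc" bdev_passthru_delete Passthru0
    "$rpc" bdev_malloc_delete "$malloc"
    [ "$("$rpc" bdev_get_bdevs | jq length)" -eq 0 ]        # back to empty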
[2024-07-15 20:19:19.183584] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:26.888 [2024-07-15 20:19:19.183605] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:26.888 Passthru0 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:26.888 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.888 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:26.888 { 00:05:26.888 "name": "Malloc0", 00:05:26.888 "aliases": [ 00:05:26.888 "ccd99a77-604e-47df-95ae-69e9640376bc" 00:05:26.888 ], 00:05:26.888 "product_name": "Malloc disk", 00:05:26.888 "block_size": 512, 00:05:26.888 "num_blocks": 16384, 00:05:26.888 "uuid": "ccd99a77-604e-47df-95ae-69e9640376bc", 00:05:26.888 "assigned_rate_limits": { 00:05:26.888 "rw_ios_per_sec": 0, 00:05:26.888 "rw_mbytes_per_sec": 0, 00:05:26.888 "r_mbytes_per_sec": 0, 00:05:26.888 "w_mbytes_per_sec": 0 00:05:26.888 }, 00:05:26.888 "claimed": true, 00:05:26.888 "claim_type": "exclusive_write", 00:05:26.888 "zoned": false, 00:05:26.888 "supported_io_types": { 00:05:26.888 "read": true, 00:05:26.888 "write": true, 00:05:26.888 "unmap": true, 00:05:26.888 "flush": true, 00:05:26.888 "reset": true, 00:05:26.888 "nvme_admin": false, 00:05:26.888 "nvme_io": false, 00:05:26.888 "nvme_io_md": false, 00:05:26.888 "write_zeroes": true, 00:05:26.888 "zcopy": true, 00:05:26.888 "get_zone_info": false, 00:05:26.888 "zone_management": false, 00:05:26.888 "zone_append": false, 00:05:26.888 "compare": false, 00:05:26.888 "compare_and_write": false, 00:05:26.888 "abort": true, 00:05:26.888 "seek_hole": false, 00:05:26.888 "seek_data": false, 00:05:26.888 "copy": true, 00:05:26.888 "nvme_iov_md": false 00:05:26.888 }, 00:05:26.888 "memory_domains": [ 00:05:26.888 { 00:05:26.888 "dma_device_id": "system", 00:05:26.888 "dma_device_type": 1 00:05:26.888 }, 00:05:26.888 { 00:05:26.888 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.888 "dma_device_type": 2 00:05:26.888 } 00:05:26.888 ], 00:05:26.888 "driver_specific": {} 00:05:26.888 }, 00:05:26.888 { 00:05:26.888 "name": "Passthru0", 00:05:26.888 "aliases": [ 00:05:26.888 "8f444d2a-c7f1-5f70-b77d-f289575c13b1" 00:05:26.888 ], 00:05:26.888 "product_name": "passthru", 00:05:26.888 "block_size": 512, 00:05:26.888 "num_blocks": 16384, 00:05:26.888 "uuid": "8f444d2a-c7f1-5f70-b77d-f289575c13b1", 00:05:26.888 "assigned_rate_limits": { 00:05:26.888 "rw_ios_per_sec": 0, 00:05:26.888 "rw_mbytes_per_sec": 0, 00:05:26.888 "r_mbytes_per_sec": 0, 00:05:26.888 "w_mbytes_per_sec": 0 00:05:26.888 }, 00:05:26.888 "claimed": false, 00:05:26.888 "zoned": false, 00:05:26.888 "supported_io_types": { 00:05:26.888 "read": true, 00:05:26.888 "write": true, 00:05:26.888 "unmap": true, 00:05:26.888 "flush": true, 00:05:26.888 "reset": true, 00:05:26.888 "nvme_admin": false, 00:05:26.888 "nvme_io": false, 00:05:26.888 "nvme_io_md": false, 00:05:26.888 "write_zeroes": true, 00:05:26.888 "zcopy": true, 00:05:26.888 "get_zone_info": false, 00:05:26.888 "zone_management": false, 00:05:26.888 "zone_append": false, 00:05:26.888 "compare": false, 00:05:26.888 "compare_and_write": false, 00:05:26.888 "abort": true, 00:05:26.888 "seek_hole": false, 
00:05:26.889 "seek_data": false, 00:05:26.889 "copy": true, 00:05:26.889 "nvme_iov_md": false 00:05:26.889 }, 00:05:26.889 "memory_domains": [ 00:05:26.889 { 00:05:26.889 "dma_device_id": "system", 00:05:26.889 "dma_device_type": 1 00:05:26.889 }, 00:05:26.889 { 00:05:26.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:26.889 "dma_device_type": 2 00:05:26.889 } 00:05:26.889 ], 00:05:26.889 "driver_specific": { 00:05:26.889 "passthru": { 00:05:26.889 "name": "Passthru0", 00:05:26.889 "base_bdev_name": "Malloc0" 00:05:26.889 } 00:05:26.889 } 00:05:26.889 } 00:05:26.889 ]' 00:05:26.889 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:26.889 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:26.889 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:26.889 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:26.889 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.179 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.179 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.179 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.179 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.179 20:19:19 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.179 00:05:27.179 real 0m0.298s 00:05:27.179 user 0m0.185s 00:05:27.179 sys 0m0.046s 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.179 20:19:19 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 ************************************ 00:05:27.179 END TEST rpc_integrity 00:05:27.179 ************************************ 00:05:27.179 20:19:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.179 20:19:19 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:27.179 20:19:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.179 20:19:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.179 20:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 ************************************ 00:05:27.179 START TEST rpc_plugins 00:05:27.179 ************************************ 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:27.179 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.179 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:27.179 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.179 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.179 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:27.179 { 00:05:27.179 "name": "Malloc1", 00:05:27.179 "aliases": [ 00:05:27.179 "204d2b0c-7fee-4ec0-81c7-237cca57bbbf" 00:05:27.179 ], 00:05:27.179 "product_name": "Malloc disk", 00:05:27.179 "block_size": 4096, 00:05:27.179 "num_blocks": 256, 00:05:27.179 "uuid": "204d2b0c-7fee-4ec0-81c7-237cca57bbbf", 00:05:27.179 "assigned_rate_limits": { 00:05:27.179 "rw_ios_per_sec": 0, 00:05:27.179 "rw_mbytes_per_sec": 0, 00:05:27.179 "r_mbytes_per_sec": 0, 00:05:27.179 "w_mbytes_per_sec": 0 00:05:27.179 }, 00:05:27.179 "claimed": false, 00:05:27.179 "zoned": false, 00:05:27.179 "supported_io_types": { 00:05:27.179 "read": true, 00:05:27.179 "write": true, 00:05:27.179 "unmap": true, 00:05:27.179 "flush": true, 00:05:27.179 "reset": true, 00:05:27.179 "nvme_admin": false, 00:05:27.179 "nvme_io": false, 00:05:27.179 "nvme_io_md": false, 00:05:27.179 "write_zeroes": true, 00:05:27.179 "zcopy": true, 00:05:27.179 "get_zone_info": false, 00:05:27.179 "zone_management": false, 00:05:27.179 "zone_append": false, 00:05:27.179 "compare": false, 00:05:27.179 "compare_and_write": false, 00:05:27.179 "abort": true, 00:05:27.179 "seek_hole": false, 00:05:27.179 "seek_data": false, 00:05:27.179 "copy": true, 00:05:27.179 "nvme_iov_md": false 00:05:27.179 }, 00:05:27.179 "memory_domains": [ 00:05:27.179 { 00:05:27.179 "dma_device_id": "system", 00:05:27.179 "dma_device_type": 1 00:05:27.179 }, 00:05:27.179 { 00:05:27.179 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.179 "dma_device_type": 2 00:05:27.179 } 00:05:27.179 ], 00:05:27.179 "driver_specific": {} 00:05:27.179 } 00:05:27.179 ]' 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.180 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:27.180 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:27.440 20:19:19 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:27.440 00:05:27.440 real 0m0.151s 00:05:27.440 user 0m0.087s 00:05:27.440 sys 0m0.026s 00:05:27.440 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.440 20:19:19 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:27.440 ************************************ 00:05:27.440 END TEST rpc_plugins 00:05:27.440 ************************************ 00:05:27.440 20:19:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.440 20:19:19 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:27.440 20:19:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.440 20:19:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.440 20:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.440 ************************************ 00:05:27.440 START TEST rpc_trace_cmd_test 00:05:27.440 ************************************ 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:27.440 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1105156", 00:05:27.440 "tpoint_group_mask": "0x8", 00:05:27.440 "iscsi_conn": { 00:05:27.440 "mask": "0x2", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "scsi": { 00:05:27.440 "mask": "0x4", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "bdev": { 00:05:27.440 "mask": "0x8", 00:05:27.440 "tpoint_mask": "0xffffffffffffffff" 00:05:27.440 }, 00:05:27.440 "nvmf_rdma": { 00:05:27.440 "mask": "0x10", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "nvmf_tcp": { 00:05:27.440 "mask": "0x20", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "ftl": { 00:05:27.440 "mask": "0x40", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "blobfs": { 00:05:27.440 "mask": "0x80", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "dsa": { 00:05:27.440 "mask": "0x200", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "thread": { 00:05:27.440 "mask": "0x400", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "nvme_pcie": { 00:05:27.440 "mask": "0x800", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "iaa": { 00:05:27.440 "mask": "0x1000", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "nvme_tcp": { 00:05:27.440 "mask": "0x2000", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "bdev_nvme": { 00:05:27.440 "mask": "0x4000", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 }, 00:05:27.440 "sock": { 00:05:27.440 "mask": "0x8000", 00:05:27.440 "tpoint_mask": "0x0" 00:05:27.440 } 00:05:27.440 }' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:27.440 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:27.701 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:27.701 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:27.701 20:19:19 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
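The three jq assertions above can be reproduced by hand once the target is up with the bdev tracepoint group enabled (spdk_tgt -e bdev, as at the start of this test run). A sketch, reusing the $rpc shorthand from the earlier note; $pid stands for whatever pid spdk_tgt reported:

    "$rpc" trace_get_info | jq -r .tpoint_group_mask   # 0x8, the bdev group selected by -e bdev
    "$rpc" trace_get_info | jq -r .bdev.tpoint_mask    # 0xffffffffffffffff, every bdev tracepoint on
    "$rpc" trace_get_info | jq -r .tpoint_shm_path     # /dev/shm/spdk_tgt_trace.pid<pid>
    spdk_trace -s spdk_tgt -p "$pid"                   # snapshot the events recorded in that shm file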
00:05:27.701 00:05:27.701 real 0m0.245s 00:05:27.701 user 0m0.207s 00:05:27.701 sys 0m0.030s 00:05:27.701 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.701 20:19:19 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 ************************************ 00:05:27.701 END TEST rpc_trace_cmd_test 00:05:27.701 ************************************ 00:05:27.701 20:19:19 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.701 20:19:19 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:27.701 20:19:19 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:27.701 20:19:19 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:27.701 20:19:19 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:27.701 20:19:19 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.701 20:19:19 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 ************************************ 00:05:27.701 START TEST rpc_daemon_integrity 00:05:27.701 ************************************ 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:27.701 20:19:19 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:27.701 { 00:05:27.701 "name": "Malloc2", 00:05:27.701 "aliases": [ 00:05:27.701 "6d4048d7-7df9-4d67-9b7b-57e6e55e6f3e" 00:05:27.701 ], 00:05:27.701 "product_name": "Malloc disk", 00:05:27.701 "block_size": 512, 00:05:27.701 "num_blocks": 16384, 00:05:27.701 "uuid": "6d4048d7-7df9-4d67-9b7b-57e6e55e6f3e", 00:05:27.701 "assigned_rate_limits": { 00:05:27.701 "rw_ios_per_sec": 0, 00:05:27.701 "rw_mbytes_per_sec": 0, 00:05:27.701 "r_mbytes_per_sec": 0, 00:05:27.701 "w_mbytes_per_sec": 0 00:05:27.701 }, 00:05:27.701 "claimed": false, 00:05:27.701 "zoned": false, 00:05:27.701 "supported_io_types": { 00:05:27.701 "read": true, 00:05:27.701 "write": true, 00:05:27.701 "unmap": true, 00:05:27.701 "flush": true, 00:05:27.701 "reset": true, 00:05:27.701 "nvme_admin": false, 00:05:27.701 "nvme_io": false, 
00:05:27.701 "nvme_io_md": false, 00:05:27.701 "write_zeroes": true, 00:05:27.701 "zcopy": true, 00:05:27.701 "get_zone_info": false, 00:05:27.701 "zone_management": false, 00:05:27.701 "zone_append": false, 00:05:27.701 "compare": false, 00:05:27.701 "compare_and_write": false, 00:05:27.701 "abort": true, 00:05:27.701 "seek_hole": false, 00:05:27.701 "seek_data": false, 00:05:27.701 "copy": true, 00:05:27.701 "nvme_iov_md": false 00:05:27.701 }, 00:05:27.701 "memory_domains": [ 00:05:27.701 { 00:05:27.701 "dma_device_id": "system", 00:05:27.701 "dma_device_type": 1 00:05:27.701 }, 00:05:27.701 { 00:05:27.701 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.701 "dma_device_type": 2 00:05:27.701 } 00:05:27.701 ], 00:05:27.701 "driver_specific": {} 00:05:27.701 } 00:05:27.701 ]' 00:05:27.701 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.961 [2024-07-15 20:19:20.092656] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:27.961 [2024-07-15 20:19:20.092687] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:27.961 [2024-07-15 20:19:20.092701] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21fdfe0 00:05:27.961 [2024-07-15 20:19:20.092708] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:27.961 [2024-07-15 20:19:20.093930] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:27.961 [2024-07-15 20:19:20.093950] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:27.961 Passthru0 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.961 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:27.961 { 00:05:27.961 "name": "Malloc2", 00:05:27.961 "aliases": [ 00:05:27.961 "6d4048d7-7df9-4d67-9b7b-57e6e55e6f3e" 00:05:27.961 ], 00:05:27.961 "product_name": "Malloc disk", 00:05:27.961 "block_size": 512, 00:05:27.961 "num_blocks": 16384, 00:05:27.961 "uuid": "6d4048d7-7df9-4d67-9b7b-57e6e55e6f3e", 00:05:27.961 "assigned_rate_limits": { 00:05:27.961 "rw_ios_per_sec": 0, 00:05:27.961 "rw_mbytes_per_sec": 0, 00:05:27.961 "r_mbytes_per_sec": 0, 00:05:27.961 "w_mbytes_per_sec": 0 00:05:27.961 }, 00:05:27.961 "claimed": true, 00:05:27.961 "claim_type": "exclusive_write", 00:05:27.961 "zoned": false, 00:05:27.961 "supported_io_types": { 00:05:27.961 "read": true, 00:05:27.961 "write": true, 00:05:27.961 "unmap": true, 00:05:27.961 "flush": true, 00:05:27.961 "reset": true, 00:05:27.961 "nvme_admin": false, 00:05:27.961 "nvme_io": false, 00:05:27.961 "nvme_io_md": false, 00:05:27.961 "write_zeroes": true, 00:05:27.961 "zcopy": true, 00:05:27.962 "get_zone_info": 
false, 00:05:27.962 "zone_management": false, 00:05:27.962 "zone_append": false, 00:05:27.962 "compare": false, 00:05:27.962 "compare_and_write": false, 00:05:27.962 "abort": true, 00:05:27.962 "seek_hole": false, 00:05:27.962 "seek_data": false, 00:05:27.962 "copy": true, 00:05:27.962 "nvme_iov_md": false 00:05:27.962 }, 00:05:27.962 "memory_domains": [ 00:05:27.962 { 00:05:27.962 "dma_device_id": "system", 00:05:27.962 "dma_device_type": 1 00:05:27.962 }, 00:05:27.962 { 00:05:27.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.962 "dma_device_type": 2 00:05:27.962 } 00:05:27.962 ], 00:05:27.962 "driver_specific": {} 00:05:27.962 }, 00:05:27.962 { 00:05:27.962 "name": "Passthru0", 00:05:27.962 "aliases": [ 00:05:27.962 "91a3863f-49d5-57e0-81c2-5de7a076922d" 00:05:27.962 ], 00:05:27.962 "product_name": "passthru", 00:05:27.962 "block_size": 512, 00:05:27.962 "num_blocks": 16384, 00:05:27.962 "uuid": "91a3863f-49d5-57e0-81c2-5de7a076922d", 00:05:27.962 "assigned_rate_limits": { 00:05:27.962 "rw_ios_per_sec": 0, 00:05:27.962 "rw_mbytes_per_sec": 0, 00:05:27.962 "r_mbytes_per_sec": 0, 00:05:27.962 "w_mbytes_per_sec": 0 00:05:27.962 }, 00:05:27.962 "claimed": false, 00:05:27.962 "zoned": false, 00:05:27.962 "supported_io_types": { 00:05:27.962 "read": true, 00:05:27.962 "write": true, 00:05:27.962 "unmap": true, 00:05:27.962 "flush": true, 00:05:27.962 "reset": true, 00:05:27.962 "nvme_admin": false, 00:05:27.962 "nvme_io": false, 00:05:27.962 "nvme_io_md": false, 00:05:27.962 "write_zeroes": true, 00:05:27.962 "zcopy": true, 00:05:27.962 "get_zone_info": false, 00:05:27.962 "zone_management": false, 00:05:27.962 "zone_append": false, 00:05:27.962 "compare": false, 00:05:27.962 "compare_and_write": false, 00:05:27.962 "abort": true, 00:05:27.962 "seek_hole": false, 00:05:27.962 "seek_data": false, 00:05:27.962 "copy": true, 00:05:27.962 "nvme_iov_md": false 00:05:27.962 }, 00:05:27.962 "memory_domains": [ 00:05:27.962 { 00:05:27.962 "dma_device_id": "system", 00:05:27.962 "dma_device_type": 1 00:05:27.962 }, 00:05:27.962 { 00:05:27.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:27.962 "dma_device_type": 2 00:05:27.962 } 00:05:27.962 ], 00:05:27.962 "driver_specific": { 00:05:27.962 "passthru": { 00:05:27.962 "name": "Passthru0", 00:05:27.962 "base_bdev_name": "Malloc2" 00:05:27.962 } 00:05:27.962 } 00:05:27.962 } 00:05:27.962 ]' 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:27.962 20:19:20 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:27.962 00:05:27.962 real 0m0.294s 00:05:27.962 user 0m0.187s 00:05:27.962 sys 0m0.043s 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:27.962 20:19:20 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:27.962 ************************************ 00:05:27.962 END TEST rpc_daemon_integrity 00:05:27.962 ************************************ 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:27.962 20:19:20 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:27.962 20:19:20 rpc -- rpc/rpc.sh@84 -- # killprocess 1105156 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@948 -- # '[' -z 1105156 ']' 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@952 -- # kill -0 1105156 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@953 -- # uname 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105156 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105156' 00:05:27.962 killing process with pid 1105156 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@967 -- # kill 1105156 00:05:27.962 20:19:20 rpc -- common/autotest_common.sh@972 -- # wait 1105156 00:05:28.221 00:05:28.221 real 0m2.481s 00:05:28.221 user 0m3.261s 00:05:28.221 sys 0m0.699s 00:05:28.221 20:19:20 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.221 20:19:20 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.221 ************************************ 00:05:28.221 END TEST rpc 00:05:28.221 ************************************ 00:05:28.221 20:19:20 -- common/autotest_common.sh@1142 -- # return 0 00:05:28.221 20:19:20 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.221 20:19:20 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.221 20:19:20 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.221 20:19:20 -- common/autotest_common.sh@10 -- # set +x 00:05:28.482 ************************************ 00:05:28.482 START TEST skip_rpc 00:05:28.482 ************************************ 00:05:28.482 20:19:20 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:28.482 * Looking for test storage... 
00:05:28.482 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:28.482 20:19:20 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:28.482 20:19:20 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:28.482 20:19:20 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:28.482 20:19:20 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.482 20:19:20 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.482 20:19:20 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:28.482 ************************************ 00:05:28.482 START TEST skip_rpc 00:05:28.482 ************************************ 00:05:28.482 20:19:20 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:28.482 20:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1105726 00:05:28.482 20:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:28.482 20:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:28.482 20:19:20 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:28.482 [2024-07-15 20:19:20.824717] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:28.482 [2024-07-15 20:19:20.824771] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1105726 ] 00:05:28.482 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.742 [2024-07-15 20:19:20.895009] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.742 [2024-07-15 20:19:20.965894] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:34.020 20:19:25 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1105726 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 1105726 ']' 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 1105726 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1105726 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1105726' 00:05:34.021 killing process with pid 1105726 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 1105726 00:05:34.021 20:19:25 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 1105726 00:05:34.021 00:05:34.021 real 0m5.279s 00:05:34.021 user 0m5.085s 00:05:34.021 sys 0m0.232s 00:05:34.021 20:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.021 20:19:26 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 ************************************ 00:05:34.021 END TEST skip_rpc 00:05:34.021 ************************************ 00:05:34.021 20:19:26 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:34.021 20:19:26 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:34.021 20:19:26 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.021 20:19:26 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.021 20:19:26 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 ************************************ 00:05:34.021 START TEST skip_rpc_with_json 00:05:34.021 ************************************ 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1106955 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1106955 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 1106955 ']' 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
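What skip_rpc just verified, in miniature: with --no-rpc-server the target never opens /var/tmp/spdk.sock, so any RPC has to fail. A sketch under the same illustrative shorthands as the notes above (paths shortened):

    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5                                   # same settle time the test uses
    if "$rpc" spdk_get_version; then          # must not succeed, nothing is listening
        echo "RPC unexpectedly answered"; exit 1
    fi
    kill "$pid"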
00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:34.021 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.021 [2024-07-15 20:19:26.170597] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:34.021 [2024-07-15 20:19:26.170649] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1106955 ] 00:05:34.021 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.021 [2024-07-15 20:19:26.238577] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.021 [2024-07-15 20:19:26.309112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.589 [2024-07-15 20:19:26.934920] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:34.589 request: 00:05:34.589 { 00:05:34.589 "trtype": "tcp", 00:05:34.589 "method": "nvmf_get_transports", 00:05:34.589 "req_id": 1 00:05:34.589 } 00:05:34.589 Got JSON-RPC error response 00:05:34.589 response: 00:05:34.589 { 00:05:34.589 "code": -19, 00:05:34.589 "message": "No such device" 00:05:34.589 } 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.589 [2024-07-15 20:19:26.947049] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:34.589 20:19:26 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:34.849 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:34.849 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:34.849 { 00:05:34.849 "subsystems": [ 00:05:34.849 { 00:05:34.849 "subsystem": "vfio_user_target", 00:05:34.849 "config": null 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "keyring", 00:05:34.849 "config": [] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "iobuf", 00:05:34.849 "config": [ 00:05:34.849 { 00:05:34.849 "method": "iobuf_set_options", 00:05:34.849 "params": { 00:05:34.849 "small_pool_count": 8192, 00:05:34.849 "large_pool_count": 1024, 00:05:34.849 "small_bufsize": 8192, 00:05:34.849 "large_bufsize": 
135168 00:05:34.849 } 00:05:34.849 } 00:05:34.849 ] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "sock", 00:05:34.849 "config": [ 00:05:34.849 { 00:05:34.849 "method": "sock_set_default_impl", 00:05:34.849 "params": { 00:05:34.849 "impl_name": "posix" 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "sock_impl_set_options", 00:05:34.849 "params": { 00:05:34.849 "impl_name": "ssl", 00:05:34.849 "recv_buf_size": 4096, 00:05:34.849 "send_buf_size": 4096, 00:05:34.849 "enable_recv_pipe": true, 00:05:34.849 "enable_quickack": false, 00:05:34.849 "enable_placement_id": 0, 00:05:34.849 "enable_zerocopy_send_server": true, 00:05:34.849 "enable_zerocopy_send_client": false, 00:05:34.849 "zerocopy_threshold": 0, 00:05:34.849 "tls_version": 0, 00:05:34.849 "enable_ktls": false 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "sock_impl_set_options", 00:05:34.849 "params": { 00:05:34.849 "impl_name": "posix", 00:05:34.849 "recv_buf_size": 2097152, 00:05:34.849 "send_buf_size": 2097152, 00:05:34.849 "enable_recv_pipe": true, 00:05:34.849 "enable_quickack": false, 00:05:34.849 "enable_placement_id": 0, 00:05:34.849 "enable_zerocopy_send_server": true, 00:05:34.849 "enable_zerocopy_send_client": false, 00:05:34.849 "zerocopy_threshold": 0, 00:05:34.849 "tls_version": 0, 00:05:34.849 "enable_ktls": false 00:05:34.849 } 00:05:34.849 } 00:05:34.849 ] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "vmd", 00:05:34.849 "config": [] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "accel", 00:05:34.849 "config": [ 00:05:34.849 { 00:05:34.849 "method": "accel_set_options", 00:05:34.849 "params": { 00:05:34.849 "small_cache_size": 128, 00:05:34.849 "large_cache_size": 16, 00:05:34.849 "task_count": 2048, 00:05:34.849 "sequence_count": 2048, 00:05:34.849 "buf_count": 2048 00:05:34.849 } 00:05:34.849 } 00:05:34.849 ] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "bdev", 00:05:34.849 "config": [ 00:05:34.849 { 00:05:34.849 "method": "bdev_set_options", 00:05:34.849 "params": { 00:05:34.849 "bdev_io_pool_size": 65535, 00:05:34.849 "bdev_io_cache_size": 256, 00:05:34.849 "bdev_auto_examine": true, 00:05:34.849 "iobuf_small_cache_size": 128, 00:05:34.849 "iobuf_large_cache_size": 16 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "bdev_raid_set_options", 00:05:34.849 "params": { 00:05:34.849 "process_window_size_kb": 1024 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "bdev_iscsi_set_options", 00:05:34.849 "params": { 00:05:34.849 "timeout_sec": 30 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "bdev_nvme_set_options", 00:05:34.849 "params": { 00:05:34.849 "action_on_timeout": "none", 00:05:34.849 "timeout_us": 0, 00:05:34.849 "timeout_admin_us": 0, 00:05:34.849 "keep_alive_timeout_ms": 10000, 00:05:34.849 "arbitration_burst": 0, 00:05:34.849 "low_priority_weight": 0, 00:05:34.849 "medium_priority_weight": 0, 00:05:34.849 "high_priority_weight": 0, 00:05:34.849 "nvme_adminq_poll_period_us": 10000, 00:05:34.849 "nvme_ioq_poll_period_us": 0, 00:05:34.849 "io_queue_requests": 0, 00:05:34.849 "delay_cmd_submit": true, 00:05:34.849 "transport_retry_count": 4, 00:05:34.849 "bdev_retry_count": 3, 00:05:34.849 "transport_ack_timeout": 0, 00:05:34.849 "ctrlr_loss_timeout_sec": 0, 00:05:34.849 "reconnect_delay_sec": 0, 00:05:34.849 "fast_io_fail_timeout_sec": 0, 00:05:34.849 "disable_auto_failback": false, 00:05:34.849 "generate_uuids": false, 00:05:34.849 "transport_tos": 0, 
00:05:34.849 "nvme_error_stat": false, 00:05:34.849 "rdma_srq_size": 0, 00:05:34.849 "io_path_stat": false, 00:05:34.849 "allow_accel_sequence": false, 00:05:34.849 "rdma_max_cq_size": 0, 00:05:34.849 "rdma_cm_event_timeout_ms": 0, 00:05:34.849 "dhchap_digests": [ 00:05:34.849 "sha256", 00:05:34.849 "sha384", 00:05:34.849 "sha512" 00:05:34.849 ], 00:05:34.849 "dhchap_dhgroups": [ 00:05:34.849 "null", 00:05:34.849 "ffdhe2048", 00:05:34.849 "ffdhe3072", 00:05:34.849 "ffdhe4096", 00:05:34.849 "ffdhe6144", 00:05:34.849 "ffdhe8192" 00:05:34.849 ] 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "bdev_nvme_set_hotplug", 00:05:34.849 "params": { 00:05:34.849 "period_us": 100000, 00:05:34.849 "enable": false 00:05:34.849 } 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "method": "bdev_wait_for_examine" 00:05:34.849 } 00:05:34.849 ] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "scsi", 00:05:34.849 "config": null 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "scheduler", 00:05:34.849 "config": [ 00:05:34.849 { 00:05:34.849 "method": "framework_set_scheduler", 00:05:34.849 "params": { 00:05:34.849 "name": "static" 00:05:34.849 } 00:05:34.849 } 00:05:34.849 ] 00:05:34.849 }, 00:05:34.849 { 00:05:34.849 "subsystem": "vhost_scsi", 00:05:34.849 "config": [] 00:05:34.849 }, 00:05:34.850 { 00:05:34.850 "subsystem": "vhost_blk", 00:05:34.850 "config": [] 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "subsystem": "ublk", 00:05:34.850 "config": [] 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "subsystem": "nbd", 00:05:34.850 "config": [] 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "subsystem": "nvmf", 00:05:34.850 "config": [ 00:05:34.850 { 00:05:34.850 "method": "nvmf_set_config", 00:05:34.850 "params": { 00:05:34.850 "discovery_filter": "match_any", 00:05:34.850 "admin_cmd_passthru": { 00:05:34.850 "identify_ctrlr": false 00:05:34.850 } 00:05:34.850 } 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "method": "nvmf_set_max_subsystems", 00:05:34.850 "params": { 00:05:34.850 "max_subsystems": 1024 00:05:34.850 } 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "method": "nvmf_set_crdt", 00:05:34.850 "params": { 00:05:34.850 "crdt1": 0, 00:05:34.850 "crdt2": 0, 00:05:34.850 "crdt3": 0 00:05:34.850 } 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "method": "nvmf_create_transport", 00:05:34.850 "params": { 00:05:34.850 "trtype": "TCP", 00:05:34.850 "max_queue_depth": 128, 00:05:34.850 "max_io_qpairs_per_ctrlr": 127, 00:05:34.850 "in_capsule_data_size": 4096, 00:05:34.850 "max_io_size": 131072, 00:05:34.850 "io_unit_size": 131072, 00:05:34.850 "max_aq_depth": 128, 00:05:34.850 "num_shared_buffers": 511, 00:05:34.850 "buf_cache_size": 4294967295, 00:05:34.850 "dif_insert_or_strip": false, 00:05:34.850 "zcopy": false, 00:05:34.850 "c2h_success": true, 00:05:34.850 "sock_priority": 0, 00:05:34.850 "abort_timeout_sec": 1, 00:05:34.850 "ack_timeout": 0, 00:05:34.850 "data_wr_pool_size": 0 00:05:34.850 } 00:05:34.850 } 00:05:34.850 ] 00:05:34.850 }, 00:05:34.850 { 00:05:34.850 "subsystem": "iscsi", 00:05:34.850 "config": [ 00:05:34.850 { 00:05:34.850 "method": "iscsi_set_options", 00:05:34.850 "params": { 00:05:34.850 "node_base": "iqn.2016-06.io.spdk", 00:05:34.850 "max_sessions": 128, 00:05:34.850 "max_connections_per_session": 2, 00:05:34.850 "max_queue_depth": 64, 00:05:34.850 "default_time2wait": 2, 00:05:34.850 "default_time2retain": 20, 00:05:34.850 "first_burst_length": 8192, 00:05:34.850 "immediate_data": true, 00:05:34.850 "allow_duplicated_isid": false, 00:05:34.850 
"error_recovery_level": 0, 00:05:34.850 "nop_timeout": 60, 00:05:34.850 "nop_in_interval": 30, 00:05:34.850 "disable_chap": false, 00:05:34.850 "require_chap": false, 00:05:34.850 "mutual_chap": false, 00:05:34.850 "chap_group": 0, 00:05:34.850 "max_large_datain_per_connection": 64, 00:05:34.850 "max_r2t_per_connection": 4, 00:05:34.850 "pdu_pool_size": 36864, 00:05:34.850 "immediate_data_pool_size": 16384, 00:05:34.850 "data_out_pool_size": 2048 00:05:34.850 } 00:05:34.850 } 00:05:34.850 ] 00:05:34.850 } 00:05:34.850 ] 00:05:34.850 } 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1106955 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1106955 ']' 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1106955 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1106955 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1106955' 00:05:34.850 killing process with pid 1106955 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1106955 00:05:34.850 20:19:27 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1106955 00:05:35.109 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1107078 00:05:35.109 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:35.109 20:19:27 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1107078 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 1107078 ']' 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 1107078 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1107078 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1107078' 00:05:40.387 killing process with pid 1107078 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 1107078 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 1107078 
00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:40.387 00:05:40.387 real 0m6.514s 00:05:40.387 user 0m6.368s 00:05:40.387 sys 0m0.530s 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:40.387 ************************************ 00:05:40.387 END TEST skip_rpc_with_json 00:05:40.387 ************************************ 00:05:40.387 20:19:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:40.387 20:19:32 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:40.387 20:19:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:40.387 20:19:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:40.387 20:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.387 ************************************ 00:05:40.387 START TEST skip_rpc_with_delay 00:05:40.387 ************************************ 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:40.387 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:40.387 [2024-07-15 20:19:32.759445] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
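The skip_rpc_with_delay case producing the error above only has to assert that this flag combination refuses to boot, since --wait-for-rpc is meaningless when no RPC server will ever start. The whole check, sketched with the same illustrative path:

    if "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo "spdk_tgt should have rejected --wait-for-rpc"; exit 1
    fi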
00:05:40.387 [2024-07-15 20:19:32.759538] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:40.648
00:05:40.648 real 0m0.077s
00:05:40.648 user 0m0.054s
00:05:40.648 sys 0m0.022s
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:40.648 20:19:32 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x
00:05:40.648 ************************************
00:05:40.648 END TEST skip_rpc_with_delay
00:05:40.648 ************************************
00:05:40.648 20:19:32 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
00:05:40.648 20:19:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname
00:05:40.648 20:19:32 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']'
00:05:40.648 20:19:32 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init
00:05:40.648 20:19:32 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:40.648 20:19:32 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:40.648 20:19:32 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:40.648 ************************************
00:05:40.648 START TEST exit_on_failed_rpc_init
00:05:40.648 ************************************
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1108383
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1108383
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 1108383 ']'
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:40.648 20:19:32 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:40.648 [2024-07-15 20:19:32.892263] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:40.648 [2024-07-15 20:19:32.892324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108383 ]
00:05:40.648 EAL: No free 2048 kB hugepages reported on node 1
00:05:40.648 [2024-07-15 20:19:32.959844] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:40.648 [2024-07-15 20:19:33.026070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]]
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2
00:05:41.588 [2024-07-15 20:19:33.697687] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:41.588 [2024-07-15 20:19:33.697730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108452 ]
00:05:41.588 EAL: No free 2048 kB hugepages reported on node 1
00:05:41.588 [2024-07-15 20:19:33.769891] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:41.588 [2024-07-15 20:19:33.834003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:41.588 [2024-07-15 20:19:33.834063] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:05:41.588 [2024-07-15 20:19:33.834072] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:05:41.588 [2024-07-15 20:19:33.834079] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1108383
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 1108383 ']'
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 1108383
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1108383
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1108383' killing process with pid 1108383
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 1108383
00:05:41.588 20:19:33 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 1108383
00:05:41.849
00:05:41.849 real 0m1.304s
00:05:41.849 user 0m1.521s
00:05:41.849 sys 0m0.350s
00:05:41.849 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.849 20:19:34 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:05:41.849 ************************************
00:05:41.849 END TEST exit_on_failed_rpc_init
00:05:41.849 ************************************
00:05:41.849 20:19:34 skip_rpc -- common/autotest_common.sh@1142 -- # return 0
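
What exit_on_failed_rpc_init just demonstrated is a plain socket collision: both instances defaulted to /var/tmp/spdk.sock, and the second one aborted during RPC init with "RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another." A sketch of the same scenario (binary path from the trace; the -r flag, used elsewhere in this run to pick a different socket, is how two targets would normally coexist):

    SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    $SPDK_TGT -m 0x1 &            # first instance claims the default /var/tmp/spdk.sock
    first=$!
    sleep 1                       # give it time to bind the RPC socket
    ! $SPDK_TGT -m 0x2            # second instance must fail: socket path already in use
    kill $first; wait $first      # giving one of them e.g. -r /var/tmp/other.sock avoids the clash
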
00:05:41.849 20:19:34 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:05:41.849
00:05:41.849 real 0m13.573s
00:05:41.849 user 0m13.176s
00:05:41.849 sys 0m1.403s
00:05:41.849 20:19:34 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:41.849 20:19:34 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:41.849 ************************************
00:05:41.849 END TEST skip_rpc
00:05:41.849 ************************************
00:05:42.110 20:19:34 -- common/autotest_common.sh@1142 -- # return 0
00:05:42.110 20:19:34 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:42.110 ************************************
00:05:42.110 START TEST rpc_client
00:05:42.110 ************************************
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:05:42.110 * Looking for test storage...
00:05:42.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:05:42.110 20:19:34 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:05:42.110 OK
00:05:42.110 20:19:34 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:05:42.110
00:05:42.110 real 0m0.133s
00:05:42.110 user 0m0.059s
00:05:42.110 sys 0m0.083s
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:42.110 20:19:34 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:05:42.110 ************************************
00:05:42.110 END TEST rpc_client
00:05:42.110 ************************************
00:05:42.110 20:19:34 -- common/autotest_common.sh@1142 -- # return 0
00:05:42.110 20:19:34 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:42.110 20:19:34 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:42.110 20:19:34 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:42.110 20:19:34 -- common/autotest_common.sh@10 -- # set +x
00:05:42.372 ************************************
00:05:42.372 START TEST json_config
00:05:42.372 ************************************
00:05:42.372 20:19:34 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:05:42.372 20:19:34 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@7 -- # uname -s
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:42.372 20:19:34 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:42.372 20:19:34 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:42.372 20:19:34 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:42.372 20:19:34 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:42.372 20:19:34 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:42.373 20:19:34 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:42.373 20:19:34 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:42.373 20:19:34 json_config -- paths/export.sh@5 -- # export PATH
00:05:42.373 20:19:34 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@47 -- # : 0
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:42.373 20:19:34 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' INFO: JSON configuration test init
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@357 -- # json_config_test_init
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:42.373 20:19:34 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc
00:05:42.373 20:19:34 json_config -- json_config/common.sh@9 -- # local app=target
00:05:42.373 20:19:34 json_config -- json_config/common.sh@10 -- # shift
00:05:42.373 20:19:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:42.373 20:19:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:42.373 20:19:34 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:42.373 20:19:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:42.373 20:19:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:42.373 20:19:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1108869
00:05:42.373 20:19:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:05:42.373 20:19:34 json_config -- json_config/common.sh@25 -- # waitforlisten 1108869 /var/tmp/spdk_tgt.sock
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 1108869 ']'
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:42.373 20:19:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:42.373 20:19:34 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:42.373 [2024-07-15 20:19:34.670875] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... [2024-07-15 20:19:34.670949] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1108869 ]
00:05:42.374 EAL: No free 2048 kB hugepages reported on node 1
00:05:42.673 [2024-07-15 20:19:34.979540] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:42.724 [2024-07-15 20:19:35.029119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:43.226 20:19:35 json_config -- json_config/common.sh@26 -- # echo ''
00:05:43.226
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@269 -- # create_accel_config
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]]
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:43.226 20:19:35 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:05:43.226 20:19:35 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config
00:05:43.226 20:19:35 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]'
00:05:43.841 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister')
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@48 -- # local get_types
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]]
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@55 -- # return 0
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]]
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]]
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]]
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]]
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config
00:05:43.841 20:19:36 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:43.841 20:19:36 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:44.101 20:19:36 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:05:44.101 20:19:36 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]]
00:05:44.101 20:19:36 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]]
00:05:44.101 20:19:36 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:44.101 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:05:44.101 MallocForNvmf0
00:05:44.101 20:19:36 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:44.101 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:05:44.361 MallocForNvmf1
00:05:44.361 20:19:36 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:05:44.361 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:05:44.361 [2024-07-15 20:19:36.666964] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:44.620 20:19:36 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:44.620 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:05:44.620 20:19:36 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:44.620 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:05:44.620 20:19:36 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:44.620 20:19:36 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:05:44.880 20:19:37 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:44.880 20:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:05:45.141 [2024-07-15 20:19:37.260861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:45.142 20:19:37 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:45.142 20:19:37 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:45.142 20:19:37 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]]
00:05:45.142 20:19:37 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:45.142 20:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:05:45.142 MallocBdevForConfigChangeCheck
00:05:45.142 20:19:37 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:45.142 20:19:37 json_config -- common/autotest_common.sh@10 -- # set +x
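
Stripped of the tgt_rpc wrapper, the subsystem setup replayed above is a short RPC sequence against the target's /var/tmp/spdk_tgt.sock; every command below is taken verbatim from the trace:

    RPC='scripts/rpc.py -s /var/tmp/spdk_tgt.sock'
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0            # 8 MB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0                 # logs "*** TCP Transport Init ***"
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
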
00:05:45.402 20:19:37 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config
00:05:45.402 20:19:37 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:45.661 20:19:37 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' INFO: shutting down applications...
00:05:45.661 20:19:37 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]]
00:05:45.661 20:19:37 json_config -- json_config/json_config.sh@368 -- # json_config_clear target
00:05:45.661 20:19:37 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]]
00:05:45.661 20:19:37 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:05:45.922 Calling clear_iscsi_subsystem
00:05:45.922 Calling clear_nvmf_subsystem
00:05:45.922 Calling clear_nbd_subsystem
00:05:45.922 Calling clear_ublk_subsystem
00:05:45.922 Calling clear_vhost_blk_subsystem
00:05:45.922 Calling clear_vhost_scsi_subsystem
00:05:45.922 Calling clear_bdev_subsystem
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@343 -- # count=100
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']'
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:45.922 20:19:38 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:05:46.182 20:19:38 json_config -- json_config/json_config.sh@345 -- # break
00:05:46.442 20:19:38 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']'
00:05:46.442 20:19:38 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target
00:05:46.442 20:19:38 json_config -- json_config/common.sh@31 -- # local app=target
00:05:46.442 20:19:38 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:05:46.442 20:19:38 json_config -- json_config/common.sh@35 -- # [[ -n 1108869 ]]
00:05:46.442 20:19:38 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1108869
00:05:46.442 20:19:38 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:05:46.442 20:19:38 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:46.442 20:19:38 json_config -- json_config/common.sh@41 -- # kill -0 1108869
00:05:46.442 20:19:38 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:05:46.702 20:19:39 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:05:46.702 20:19:39 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:05:46.702 20:19:39 json_config -- json_config/common.sh@41 -- # kill -0 1108869
00:05:46.702 20:19:39 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:05:46.702 20:19:39 json_config -- json_config/common.sh@43 -- # break
00:05:46.702 20:19:39 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:05:46.702 20:19:39 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' SPDK target shutdown done
00:05:46.702 20:19:39 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' INFO: relaunching applications...
00:05:46.702 20:19:39 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.702 20:19:39 json_config -- json_config/common.sh@9 -- # local app=target
00:05:46.703 20:19:39 json_config -- json_config/common.sh@10 -- # shift
00:05:46.703 20:19:39 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:46.703 20:19:39 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:46.703 20:19:39 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:05:46.703 20:19:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:46.703 20:19:39 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:46.703 20:19:39 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1109726
00:05:46.703 20:19:39 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:05:46.703 20:19:39 json_config -- json_config/common.sh@25 -- # waitforlisten 1109726 /var/tmp/spdk_tgt.sock
00:05:46.703 20:19:39 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@829 -- # '[' -z 1109726 ']'
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:46.703 20:19:39 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:46.964 [2024-07-15 20:19:39.126801] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:05:46.964 [2024-07-15 20:19:39.126866] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1109726 ]
00:05:46.964 EAL: No free 2048 kB hugepages reported on node 1
00:05:47.224 [2024-07-15 20:19:39.547021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:47.485 [2024-07-15 20:19:39.609277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:47.746 [2024-07-15 20:19:40.113174] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:05:48.006 [2024-07-15 20:19:40.145525] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:05:48.006 20:19:40 json_config -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:05:48.006 20:19:40 json_config -- common/autotest_common.sh@862 -- # return 0
00:05:48.007 20:19:40 json_config -- json_config/common.sh@26 -- # echo ''
00:05:48.007
00:05:48.007 20:19:40 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]]
00:05:48.007 20:19:40 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' INFO: Checking if target configuration is the same...
00:05:48.007 20:19:40 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:48.007 20:19:40 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config
00:05:48.007 20:19:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:48.007 + '[' 2 -ne 2 ']'
00:05:48.007 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:48.007 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:48.007 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:48.007 +++ basename /dev/fd/62
00:05:48.007 ++ mktemp /tmp/62.XXX
00:05:48.007 + tmp_file_1=/tmp/62.Y64
00:05:48.007 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:48.007 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:48.007 + tmp_file_2=/tmp/spdk_tgt_config.json.OV2
00:05:48.007 + ret=0
00:05:48.007 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:48.267 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:48.267 + diff -u /tmp/62.Y64 /tmp/spdk_tgt_config.json.OV2
00:05:48.267 + echo 'INFO: JSON config files are the same' INFO: JSON config files are the same
00:05:48.267 + rm /tmp/62.Y64 /tmp/spdk_tgt_config.json.OV2
00:05:48.267 + exit 0
00:05:48.267 20:19:40 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]]
00:05:48.267 20:19:40 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' INFO: changing configuration and checking if this can be detected...
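
The comparison that just passed is deliberately order-insensitive: both the live config and the file saved before the restart are pushed through config_filter.py -method sort before diff, so only semantic differences can flip the result. Reduced to its core (temp-file names illustrative; paths and commands from the trace):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.json
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/saved.json
    diff -u /tmp/live.json /tmp/saved.json && echo 'INFO: JSON config files are the same'
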
00:05:48.267 20:19:40 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:48.267 20:19:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
00:05:48.528 20:19:40 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config
00:05:48.528 20:19:40 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:05:48.528 20:19:40 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:48.528 + '[' 2 -ne 2 ']'
00:05:48.528 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh
00:05:48.528 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../..
00:05:48.528 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:05:48.528 +++ basename /dev/fd/62
00:05:48.528 ++ mktemp /tmp/62.XXX
00:05:48.528 + tmp_file_1=/tmp/62.LIK
00:05:48.528 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:48.528 ++ mktemp /tmp/spdk_tgt_config.json.XXX
00:05:48.528 + tmp_file_2=/tmp/spdk_tgt_config.json.kKD
00:05:48.528 + ret=0
00:05:48.528 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:48.789 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort
00:05:48.789 + diff -u /tmp/62.LIK /tmp/spdk_tgt_config.json.kKD
00:05:48.789 + ret=1
00:05:48.789 + echo '=== Start of file: /tmp/62.LIK ==='
00:05:48.789 + cat /tmp/62.LIK
00:05:48.789 + echo '=== End of file: /tmp/62.LIK ==='
00:05:48.789 + echo ''
00:05:48.789 + echo '=== Start of file: /tmp/spdk_tgt_config.json.kKD ==='
00:05:48.789 + cat /tmp/spdk_tgt_config.json.kKD
00:05:48.789 + echo '=== End of file: /tmp/spdk_tgt_config.json.kKD ==='
00:05:48.789 + echo ''
00:05:48.789 + rm /tmp/62.LIK /tmp/spdk_tgt_config.json.kKD
00:05:48.789 + exit 1
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' INFO: configuration change detected.
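
Here the sentinel bdev created during init earns its name: deleting MallocBdevForConfigChangeCheck is the single mutation the test makes, and the re-run of the sorted diff exits 1, proving a real change cannot slip through the comparison. In isolation (same rpc.py usage as above):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
    # Re-running the save_config | sort | diff pipeline now returns non-zero,
    # which json_config.sh reports as "INFO: configuration change detected."
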
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@307 -- # local ret=0
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]]
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@317 -- # [[ -n 1109726 ]]
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@722 -- # xtrace_disable
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]]
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@193 -- # uname -s
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]]
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]]
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:48.789 20:19:41 json_config -- json_config/json_config.sh@323 -- # killprocess 1109726
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@948 -- # '[' -z 1109726 ']'
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@952 -- # kill -0 1109726
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@953 -- # uname
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:05:48.789 20:19:41 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1109726
00:05:49.050 20:19:41 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:05:49.050 20:19:41 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:05:49.050 20:19:41 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1109726' killing process with pid 1109726
00:05:49.050 20:19:41 json_config -- common/autotest_common.sh@967 -- # kill 1109726
00:05:49.050 20:19:41 json_config -- common/autotest_common.sh@972 -- # wait 1109726
00:05:49.311 20:19:41 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json
00:05:49.311 20:19:41 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini
00:05:49.311 20:19:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable
00:05:49.311 20:19:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:49.311 20:19:41 json_config -- json_config/json_config.sh@328 -- # return 0
00:05:49.311 20:19:41 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' INFO: Success
00:05:49.311
00:05:49.311 real 0m7.023s
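
The killprocess() calls that close out each test follow one fixed shape, visible in the trace: confirm the pid is still alive, confirm it really is an SPDK reactor (and not, say, a recycled sudo pid), then signal and reap it:

    pid=1109726                                   # pid taken from the run above
    kill -0 "$pid"                                # liveness probe only; sends no signal
    [ "$(ps --no-headers -o comm= "$pid")" = reactor_0 ]
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid"                                   # reap the child so its exit status is collected
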
00:05:49.311 user 0m8.240s
00:05:49.311 sys 0m1.858s
00:05:49.311 20:19:41 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:49.311 20:19:41 json_config -- common/autotest_common.sh@10 -- # set +x
00:05:49.311 ************************************
00:05:49.311 END TEST json_config
00:05:49.311 ************************************
00:05:49.311 20:19:41 -- common/autotest_common.sh@1142 -- # return 0
00:05:49.311 20:19:41 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:49.311 20:19:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:05:49.311 20:19:41 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:49.311 20:19:41 -- common/autotest_common.sh@10 -- # set +x
00:05:49.311 ************************************
00:05:49.311 START TEST json_config_extra_key
00:05:49.311 ************************************
00:05:49.312 20:19:41 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:49.312 20:19:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:49.312 20:19:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:49.312 20:19:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:49.312 20:19:41 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.312 20:19:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.312 20:19:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.312 20:19:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH
00:05:49.312 20:19:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:49.312 20:19:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='')
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock')
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket
00:05:49.312 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024')
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json')
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' INFO: launching applications...
00:05:49.573 20:19:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@10 -- # shift
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]]
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params=
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1110481
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' Waiting for target to run...
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1110481 /var/tmp/spdk_tgt.sock
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 1110481 ']'
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:05:49.573 20:19:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable
00:05:49.573 20:19:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x
00:05:49.573 [2024-07-15 20:19:41.749402] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
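
While the log prints "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...", waitforlisten is polling the new target's RPC socket until it answers. A sketch of that loop (the exact probe is an assumption; rpc_get_methods is a standard SPDK RPC, and max_retries=100 appears in the trace):

    for ((i = 0; i < 100; i++)); do
        scripts/rpc.py -t 1 -s /var/tmp/spdk_tgt.sock rpc_get_methods &>/dev/null && break
        sleep 0.5
    done
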
00:05:49.573 [2024-07-15 20:19:41.749459] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110481 ] 00:05:49.573 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.834 [2024-07-15 20:19:41.982874] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.834 [2024-07-15 20:19:42.035125] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.404 20:19:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.404 20:19:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:50.404 00:05:50.404 20:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:50.404 INFO: shutting down applications... 00:05:50.404 20:19:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1110481 ]] 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1110481 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1110481 00:05:50.404 20:19:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1110481 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:50.664 20:19:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:50.664 SPDK target shutdown done 00:05:50.664 20:19:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:50.664 Success 00:05:50.664 00:05:50.664 real 0m1.429s 00:05:50.664 user 0m1.120s 00:05:50.664 sys 0m0.325s 00:05:50.664 20:19:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.664 20:19:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:50.664 ************************************ 00:05:50.664 END TEST json_config_extra_key 00:05:50.664 ************************************ 00:05:50.924 20:19:43 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.924 20:19:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.924 20:19:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.924 20:19:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.924 20:19:43 -- 
common/autotest_common.sh@10 -- # set +x 00:05:50.924 ************************************ 00:05:50.924 START TEST alias_rpc 00:05:50.924 ************************************ 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:50.924 * Looking for test storage... 00:05:50.924 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:50.924 20:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:50.924 20:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1110867 00:05:50.924 20:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1110867 00:05:50.924 20:19:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 1110867 ']' 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:50.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.924 20:19:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.924 [2024-07-15 20:19:43.247076] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:50.924 [2024-07-15 20:19:43.247130] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1110867 ] 00:05:50.924 EAL: No free 2048 kB hugepages reported on node 1 00:05:51.183 [2024-07-15 20:19:43.315608] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.183 [2024-07-15 20:19:43.381940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.755 20:19:44 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.755 20:19:44 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:51.755 20:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:52.016 20:19:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1110867 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 1110867 ']' 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 1110867 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1110867 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1110867' 00:05:52.016 killing process with pid 1110867 00:05:52.016 20:19:44 alias_rpc -- 
common/autotest_common.sh@967 -- # kill 1110867 00:05:52.016 20:19:44 alias_rpc -- common/autotest_common.sh@972 -- # wait 1110867 00:05:52.277 00:05:52.277 real 0m1.375s 00:05:52.277 user 0m1.530s 00:05:52.277 sys 0m0.360s 00:05:52.277 20:19:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.277 20:19:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:52.277 ************************************ 00:05:52.277 END TEST alias_rpc 00:05:52.277 ************************************ 00:05:52.277 20:19:44 -- common/autotest_common.sh@1142 -- # return 0 00:05:52.277 20:19:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:52.277 20:19:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.277 20:19:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.277 20:19:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.277 20:19:44 -- common/autotest_common.sh@10 -- # set +x 00:05:52.277 ************************************ 00:05:52.277 START TEST spdkcli_tcp 00:05:52.277 ************************************ 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:52.277 * Looking for test storage... 00:05:52.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1111225 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1111225 00:05:52.277 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 1111225 ']' 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.277 20:19:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:52.537 [2024-07-15 20:19:44.706404] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
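Before the rpc_get_methods listing that follows, tcp.sh bridges the target's UNIX-domain RPC socket to TCP; a minimal sketch of that bridge, reusing the socat and rpc.py invocations visible in this output:

    # Expose the UNIX RPC socket on 127.0.0.1:9998 (runs until killed)
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    # Drive an RPC over TCP: up to 100 retries, 2 s timeout per attempt
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods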
00:05:52.537 [2024-07-15 20:19:44.706458] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111225 ] 00:05:52.537 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.537 [2024-07-15 20:19:44.775029] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:52.537 [2024-07-15 20:19:44.844248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.537 [2024-07-15 20:19:44.844335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.798 20:19:44 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:52.798 20:19:44 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:52.798 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1111263 00:05:52.798 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:52.798 20:19:44 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:52.798 [ 00:05:52.798 "bdev_malloc_delete", 00:05:52.798 "bdev_malloc_create", 00:05:52.798 "bdev_null_resize", 00:05:52.798 "bdev_null_delete", 00:05:52.798 "bdev_null_create", 00:05:52.798 "bdev_nvme_cuse_unregister", 00:05:52.798 "bdev_nvme_cuse_register", 00:05:52.798 "bdev_opal_new_user", 00:05:52.798 "bdev_opal_set_lock_state", 00:05:52.798 "bdev_opal_delete", 00:05:52.798 "bdev_opal_get_info", 00:05:52.798 "bdev_opal_create", 00:05:52.798 "bdev_nvme_opal_revert", 00:05:52.798 "bdev_nvme_opal_init", 00:05:52.798 "bdev_nvme_send_cmd", 00:05:52.798 "bdev_nvme_get_path_iostat", 00:05:52.798 "bdev_nvme_get_mdns_discovery_info", 00:05:52.798 "bdev_nvme_stop_mdns_discovery", 00:05:52.798 "bdev_nvme_start_mdns_discovery", 00:05:52.798 "bdev_nvme_set_multipath_policy", 00:05:52.798 "bdev_nvme_set_preferred_path", 00:05:52.798 "bdev_nvme_get_io_paths", 00:05:52.798 "bdev_nvme_remove_error_injection", 00:05:52.798 "bdev_nvme_add_error_injection", 00:05:52.798 "bdev_nvme_get_discovery_info", 00:05:52.798 "bdev_nvme_stop_discovery", 00:05:52.798 "bdev_nvme_start_discovery", 00:05:52.798 "bdev_nvme_get_controller_health_info", 00:05:52.798 "bdev_nvme_disable_controller", 00:05:52.798 "bdev_nvme_enable_controller", 00:05:52.798 "bdev_nvme_reset_controller", 00:05:52.798 "bdev_nvme_get_transport_statistics", 00:05:52.798 "bdev_nvme_apply_firmware", 00:05:52.798 "bdev_nvme_detach_controller", 00:05:52.798 "bdev_nvme_get_controllers", 00:05:52.798 "bdev_nvme_attach_controller", 00:05:52.798 "bdev_nvme_set_hotplug", 00:05:52.798 "bdev_nvme_set_options", 00:05:52.798 "bdev_passthru_delete", 00:05:52.798 "bdev_passthru_create", 00:05:52.798 "bdev_lvol_set_parent_bdev", 00:05:52.798 "bdev_lvol_set_parent", 00:05:52.798 "bdev_lvol_check_shallow_copy", 00:05:52.798 "bdev_lvol_start_shallow_copy", 00:05:52.798 "bdev_lvol_grow_lvstore", 00:05:52.798 "bdev_lvol_get_lvols", 00:05:52.798 "bdev_lvol_get_lvstores", 00:05:52.798 "bdev_lvol_delete", 00:05:52.798 "bdev_lvol_set_read_only", 00:05:52.798 "bdev_lvol_resize", 00:05:52.798 "bdev_lvol_decouple_parent", 00:05:52.798 "bdev_lvol_inflate", 00:05:52.798 "bdev_lvol_rename", 00:05:52.798 "bdev_lvol_clone_bdev", 00:05:52.798 "bdev_lvol_clone", 00:05:52.798 "bdev_lvol_snapshot", 00:05:52.798 "bdev_lvol_create", 00:05:52.798 "bdev_lvol_delete_lvstore", 00:05:52.798 
"bdev_lvol_rename_lvstore", 00:05:52.798 "bdev_lvol_create_lvstore", 00:05:52.798 "bdev_raid_set_options", 00:05:52.798 "bdev_raid_remove_base_bdev", 00:05:52.798 "bdev_raid_add_base_bdev", 00:05:52.798 "bdev_raid_delete", 00:05:52.798 "bdev_raid_create", 00:05:52.798 "bdev_raid_get_bdevs", 00:05:52.798 "bdev_error_inject_error", 00:05:52.798 "bdev_error_delete", 00:05:52.798 "bdev_error_create", 00:05:52.798 "bdev_split_delete", 00:05:52.799 "bdev_split_create", 00:05:52.799 "bdev_delay_delete", 00:05:52.799 "bdev_delay_create", 00:05:52.799 "bdev_delay_update_latency", 00:05:52.799 "bdev_zone_block_delete", 00:05:52.799 "bdev_zone_block_create", 00:05:52.799 "blobfs_create", 00:05:52.799 "blobfs_detect", 00:05:52.799 "blobfs_set_cache_size", 00:05:52.799 "bdev_aio_delete", 00:05:52.799 "bdev_aio_rescan", 00:05:52.799 "bdev_aio_create", 00:05:52.799 "bdev_ftl_set_property", 00:05:52.799 "bdev_ftl_get_properties", 00:05:52.799 "bdev_ftl_get_stats", 00:05:52.799 "bdev_ftl_unmap", 00:05:52.799 "bdev_ftl_unload", 00:05:52.799 "bdev_ftl_delete", 00:05:52.799 "bdev_ftl_load", 00:05:52.799 "bdev_ftl_create", 00:05:52.799 "bdev_virtio_attach_controller", 00:05:52.799 "bdev_virtio_scsi_get_devices", 00:05:52.799 "bdev_virtio_detach_controller", 00:05:52.799 "bdev_virtio_blk_set_hotplug", 00:05:52.799 "bdev_iscsi_delete", 00:05:52.799 "bdev_iscsi_create", 00:05:52.799 "bdev_iscsi_set_options", 00:05:52.799 "accel_error_inject_error", 00:05:52.799 "ioat_scan_accel_module", 00:05:52.799 "dsa_scan_accel_module", 00:05:52.799 "iaa_scan_accel_module", 00:05:52.799 "vfu_virtio_create_scsi_endpoint", 00:05:52.799 "vfu_virtio_scsi_remove_target", 00:05:52.799 "vfu_virtio_scsi_add_target", 00:05:52.799 "vfu_virtio_create_blk_endpoint", 00:05:52.799 "vfu_virtio_delete_endpoint", 00:05:52.799 "keyring_file_remove_key", 00:05:52.799 "keyring_file_add_key", 00:05:52.799 "keyring_linux_set_options", 00:05:52.799 "iscsi_get_histogram", 00:05:52.799 "iscsi_enable_histogram", 00:05:52.799 "iscsi_set_options", 00:05:52.799 "iscsi_get_auth_groups", 00:05:52.799 "iscsi_auth_group_remove_secret", 00:05:52.799 "iscsi_auth_group_add_secret", 00:05:52.799 "iscsi_delete_auth_group", 00:05:52.799 "iscsi_create_auth_group", 00:05:52.799 "iscsi_set_discovery_auth", 00:05:52.799 "iscsi_get_options", 00:05:52.799 "iscsi_target_node_request_logout", 00:05:52.799 "iscsi_target_node_set_redirect", 00:05:52.799 "iscsi_target_node_set_auth", 00:05:52.799 "iscsi_target_node_add_lun", 00:05:52.799 "iscsi_get_stats", 00:05:52.799 "iscsi_get_connections", 00:05:52.799 "iscsi_portal_group_set_auth", 00:05:52.799 "iscsi_start_portal_group", 00:05:52.799 "iscsi_delete_portal_group", 00:05:52.799 "iscsi_create_portal_group", 00:05:52.799 "iscsi_get_portal_groups", 00:05:52.799 "iscsi_delete_target_node", 00:05:52.799 "iscsi_target_node_remove_pg_ig_maps", 00:05:52.799 "iscsi_target_node_add_pg_ig_maps", 00:05:52.799 "iscsi_create_target_node", 00:05:52.799 "iscsi_get_target_nodes", 00:05:52.799 "iscsi_delete_initiator_group", 00:05:52.799 "iscsi_initiator_group_remove_initiators", 00:05:52.799 "iscsi_initiator_group_add_initiators", 00:05:52.799 "iscsi_create_initiator_group", 00:05:52.799 "iscsi_get_initiator_groups", 00:05:52.799 "nvmf_set_crdt", 00:05:52.799 "nvmf_set_config", 00:05:52.799 "nvmf_set_max_subsystems", 00:05:52.799 "nvmf_stop_mdns_prr", 00:05:52.799 "nvmf_publish_mdns_prr", 00:05:52.799 "nvmf_subsystem_get_listeners", 00:05:52.799 "nvmf_subsystem_get_qpairs", 00:05:52.799 "nvmf_subsystem_get_controllers", 00:05:52.799 
"nvmf_get_stats", 00:05:52.799 "nvmf_get_transports", 00:05:52.799 "nvmf_create_transport", 00:05:52.799 "nvmf_get_targets", 00:05:52.799 "nvmf_delete_target", 00:05:52.799 "nvmf_create_target", 00:05:52.799 "nvmf_subsystem_allow_any_host", 00:05:52.799 "nvmf_subsystem_remove_host", 00:05:52.799 "nvmf_subsystem_add_host", 00:05:52.799 "nvmf_ns_remove_host", 00:05:52.799 "nvmf_ns_add_host", 00:05:52.799 "nvmf_subsystem_remove_ns", 00:05:52.799 "nvmf_subsystem_add_ns", 00:05:52.799 "nvmf_subsystem_listener_set_ana_state", 00:05:52.799 "nvmf_discovery_get_referrals", 00:05:52.799 "nvmf_discovery_remove_referral", 00:05:52.799 "nvmf_discovery_add_referral", 00:05:52.799 "nvmf_subsystem_remove_listener", 00:05:52.799 "nvmf_subsystem_add_listener", 00:05:52.799 "nvmf_delete_subsystem", 00:05:52.799 "nvmf_create_subsystem", 00:05:52.799 "nvmf_get_subsystems", 00:05:52.799 "env_dpdk_get_mem_stats", 00:05:52.799 "nbd_get_disks", 00:05:52.799 "nbd_stop_disk", 00:05:52.799 "nbd_start_disk", 00:05:52.799 "ublk_recover_disk", 00:05:52.799 "ublk_get_disks", 00:05:52.799 "ublk_stop_disk", 00:05:52.799 "ublk_start_disk", 00:05:52.799 "ublk_destroy_target", 00:05:52.799 "ublk_create_target", 00:05:52.799 "virtio_blk_create_transport", 00:05:52.799 "virtio_blk_get_transports", 00:05:52.799 "vhost_controller_set_coalescing", 00:05:52.799 "vhost_get_controllers", 00:05:52.799 "vhost_delete_controller", 00:05:52.799 "vhost_create_blk_controller", 00:05:52.799 "vhost_scsi_controller_remove_target", 00:05:52.799 "vhost_scsi_controller_add_target", 00:05:52.799 "vhost_start_scsi_controller", 00:05:52.799 "vhost_create_scsi_controller", 00:05:52.799 "thread_set_cpumask", 00:05:52.799 "framework_get_governor", 00:05:52.799 "framework_get_scheduler", 00:05:52.799 "framework_set_scheduler", 00:05:52.799 "framework_get_reactors", 00:05:52.799 "thread_get_io_channels", 00:05:52.799 "thread_get_pollers", 00:05:52.799 "thread_get_stats", 00:05:52.799 "framework_monitor_context_switch", 00:05:52.799 "spdk_kill_instance", 00:05:52.799 "log_enable_timestamps", 00:05:52.799 "log_get_flags", 00:05:52.799 "log_clear_flag", 00:05:52.799 "log_set_flag", 00:05:52.799 "log_get_level", 00:05:52.799 "log_set_level", 00:05:52.799 "log_get_print_level", 00:05:52.799 "log_set_print_level", 00:05:52.799 "framework_enable_cpumask_locks", 00:05:52.799 "framework_disable_cpumask_locks", 00:05:52.799 "framework_wait_init", 00:05:52.799 "framework_start_init", 00:05:52.799 "scsi_get_devices", 00:05:52.799 "bdev_get_histogram", 00:05:52.799 "bdev_enable_histogram", 00:05:52.799 "bdev_set_qos_limit", 00:05:52.799 "bdev_set_qd_sampling_period", 00:05:52.799 "bdev_get_bdevs", 00:05:52.799 "bdev_reset_iostat", 00:05:52.799 "bdev_get_iostat", 00:05:52.799 "bdev_examine", 00:05:52.799 "bdev_wait_for_examine", 00:05:52.799 "bdev_set_options", 00:05:52.799 "notify_get_notifications", 00:05:52.799 "notify_get_types", 00:05:52.799 "accel_get_stats", 00:05:52.799 "accel_set_options", 00:05:52.799 "accel_set_driver", 00:05:52.799 "accel_crypto_key_destroy", 00:05:52.799 "accel_crypto_keys_get", 00:05:52.799 "accel_crypto_key_create", 00:05:52.799 "accel_assign_opc", 00:05:52.799 "accel_get_module_info", 00:05:52.799 "accel_get_opc_assignments", 00:05:52.799 "vmd_rescan", 00:05:52.799 "vmd_remove_device", 00:05:52.799 "vmd_enable", 00:05:52.799 "sock_get_default_impl", 00:05:52.799 "sock_set_default_impl", 00:05:52.799 "sock_impl_set_options", 00:05:52.799 "sock_impl_get_options", 00:05:52.799 "iobuf_get_stats", 00:05:52.800 "iobuf_set_options", 
00:05:52.800 "keyring_get_keys", 00:05:52.800 "framework_get_pci_devices", 00:05:52.800 "framework_get_config", 00:05:52.800 "framework_get_subsystems", 00:05:52.800 "vfu_tgt_set_base_path", 00:05:52.800 "trace_get_info", 00:05:52.800 "trace_get_tpoint_group_mask", 00:05:52.800 "trace_disable_tpoint_group", 00:05:52.800 "trace_enable_tpoint_group", 00:05:52.800 "trace_clear_tpoint_mask", 00:05:52.800 "trace_set_tpoint_mask", 00:05:52.800 "spdk_get_version", 00:05:52.800 "rpc_get_methods" 00:05:52.800 ] 00:05:52.800 20:19:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:52.800 20:19:45 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:52.800 20:19:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.061 20:19:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:53.061 20:19:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1111225 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 1111225 ']' 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 1111225 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111225 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111225' 00:05:53.061 killing process with pid 1111225 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 1111225 00:05:53.061 20:19:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 1111225 00:05:53.322 00:05:53.322 real 0m0.934s 00:05:53.322 user 0m1.584s 00:05:53.322 sys 0m0.375s 00:05:53.322 20:19:45 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:53.322 20:19:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:53.322 ************************************ 00:05:53.322 END TEST spdkcli_tcp 00:05:53.322 ************************************ 00:05:53.322 20:19:45 -- common/autotest_common.sh@1142 -- # return 0 00:05:53.322 20:19:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.322 20:19:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:53.322 20:19:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:53.322 20:19:45 -- common/autotest_common.sh@10 -- # set +x 00:05:53.322 ************************************ 00:05:53.322 START TEST dpdk_mem_utility 00:05:53.322 ************************************ 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:53.322 * Looking for test storage... 
00:05:53.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:53.322 20:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:53.322 20:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1111335 00:05:53.322 20:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1111335 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 1111335 ']' 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:53.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.322 20:19:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:53.322 20:19:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:53.582 [2024-07-15 20:19:45.703910] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:53.582 [2024-07-15 20:19:45.703963] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111335 ] 00:05:53.582 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.582 [2024-07-15 20:19:45.771664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.582 [2024-07-15 20:19:45.839194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.154 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:54.154 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:54.154 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:54.154 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:54.154 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:54.154 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.154 { 00:05:54.154 "filename": "/tmp/spdk_mem_dump.txt" 00:05:54.154 } 00:05:54.154 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:54.154 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:54.154 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:54.154 1 heaps totaling size 814.000000 MiB 00:05:54.154 size: 814.000000 MiB heap id: 0 00:05:54.154 end heaps---------- 00:05:54.154 8 mempools totaling size 598.116089 MiB 00:05:54.154 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:54.154 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:54.154 size: 84.521057 MiB name: bdev_io_1111335 00:05:54.154 size: 51.011292 MiB name: evtpool_1111335 00:05:54.154 
size: 50.003479 MiB name: msgpool_1111335 00:05:54.154 size: 21.763794 MiB name: PDU_Pool 00:05:54.154 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:54.154 size: 0.026123 MiB name: Session_Pool 00:05:54.154 end mempools------- 00:05:54.154 6 memzones totaling size 4.142822 MiB 00:05:54.154 size: 1.000366 MiB name: RG_ring_0_1111335 00:05:54.154 size: 1.000366 MiB name: RG_ring_1_1111335 00:05:54.154 size: 1.000366 MiB name: RG_ring_4_1111335 00:05:54.154 size: 1.000366 MiB name: RG_ring_5_1111335 00:05:54.154 size: 0.125366 MiB name: RG_ring_2_1111335 00:05:54.154 size: 0.015991 MiB name: RG_ring_3_1111335 00:05:54.154 end memzones------- 00:05:54.154 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:54.414 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:54.414 list of free elements. size: 12.519348 MiB 00:05:54.414 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:54.414 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:54.414 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:54.414 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:54.414 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:54.414 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:54.415 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:54.415 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:54.415 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:54.415 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:54.415 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:54.415 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:54.415 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:54.415 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:54.415 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:54.415 list of standard malloc elements. 
size: 199.218079 MiB 00:05:54.415 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:54.415 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:54.415 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:54.415 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:54.415 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:54.415 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:54.415 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:54.415 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:54.415 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:54.415 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:54.415 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:54.415 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:54.415 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:54.415 list of memzone associated elements. 
size: 602.262573 MiB 00:05:54.415 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:54.415 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:54.415 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:54.415 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:54.415 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:54.415 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_1111335_0 00:05:54.415 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:54.415 associated memzone info: size: 48.002930 MiB name: MP_evtpool_1111335_0 00:05:54.415 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:54.415 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1111335_0 00:05:54.415 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:54.415 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:54.415 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:54.415 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:54.415 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:54.415 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_1111335 00:05:54.415 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:54.415 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1111335 00:05:54.415 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:54.415 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1111335 00:05:54.415 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:54.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:54.415 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:54.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:54.415 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:54.415 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:54.415 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:54.415 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:54.415 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:54.415 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1111335 00:05:54.415 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:54.415 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1111335 00:05:54.415 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:54.415 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1111335 00:05:54.415 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:54.415 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1111335 00:05:54.415 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:54.415 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1111335 00:05:54.415 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:54.415 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:54.415 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:54.415 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:54.415 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:54.415 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:54.415 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:54.415 associated 
memzone info: size: 0.125366 MiB name: RG_ring_2_1111335 00:05:54.415 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:54.415 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:54.415 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:54.415 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:54.415 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:54.416 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1111335 00:05:54.416 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:54.416 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:54.416 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:54.416 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1111335 00:05:54.416 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:54.416 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1111335 00:05:54.416 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:54.416 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:54.416 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:54.416 20:19:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1111335 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 1111335 ']' 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 1111335 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1111335 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1111335' 00:05:54.416 killing process with pid 1111335 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 1111335 00:05:54.416 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 1111335 00:05:54.677 00:05:54.677 real 0m1.267s 00:05:54.677 user 0m1.322s 00:05:54.677 sys 0m0.362s 00:05:54.677 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.677 20:19:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:54.677 ************************************ 00:05:54.677 END TEST dpdk_mem_utility 00:05:54.677 ************************************ 00:05:54.677 20:19:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:54.677 20:19:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.677 20:19:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.677 20:19:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.677 20:19:46 -- common/autotest_common.sh@10 -- # set +x 00:05:54.677 ************************************ 00:05:54.677 START TEST event 00:05:54.677 ************************************ 00:05:54.677 20:19:46 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:54.677 * Looking for test storage... 
00:05:54.677 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:54.677 20:19:46 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:54.677 20:19:46 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:54.677 20:19:46 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.677 20:19:46 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:54.677 20:19:46 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.677 20:19:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:54.677 ************************************ 00:05:54.677 START TEST event_perf 00:05:54.677 ************************************ 00:05:54.677 20:19:47 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:54.677 Running I/O for 1 seconds...[2024-07-15 20:19:47.046616] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:54.677 [2024-07-15 20:19:47.046724] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1111721 ] 00:05:54.937 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.937 [2024-07-15 20:19:47.117971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:54.937 [2024-07-15 20:19:47.189818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.937 [2024-07-15 20:19:47.189932] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.937 [2024-07-15 20:19:47.190087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:54.937 Running I/O for 1 seconds...[2024-07-15 20:19:47.190088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:55.878 00:05:55.878 lcore 0: 181268 00:05:55.878 lcore 1: 181266 00:05:55.878 lcore 2: 181267 00:05:55.878 lcore 3: 181269 00:05:55.878 done. 00:05:55.878 00:05:55.878 real 0m1.217s 00:05:55.878 user 0m4.131s 00:05:55.878 sys 0m0.081s 00:05:55.878 20:19:48 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:55.878 20:19:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:55.878 ************************************ 00:05:55.878 END TEST event_perf 00:05:55.878 ************************************ 00:05:56.138 20:19:48 event -- common/autotest_common.sh@1142 -- # return 0 00:05:56.138 20:19:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.138 20:19:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:56.138 20:19:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.138 20:19:48 event -- common/autotest_common.sh@10 -- # set +x 00:05:56.138 ************************************ 00:05:56.138 START TEST event_reactor 00:05:56.138 ************************************ 00:05:56.138 20:19:48 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:56.138 [2024-07-15 20:19:48.339031] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
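The START TEST / END TEST banners and the real/user/sys blocks throughout this section come from the run_test wrapper in autotest_common.sh; a rough approximation of its shape, inferred from the output here rather than copied from the helper:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # produces the timing summary printed at each suite's end
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return $rc
    }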
00:05:56.138 [2024-07-15 20:19:48.339142] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112083 ] 00:05:56.138 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.138 [2024-07-15 20:19:48.413906] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:56.138 [2024-07-15 20:19:48.477163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:57.523 test_start 00:05:57.523 oneshot 00:05:57.523 tick 100 00:05:57.523 tick 100 00:05:57.523 tick 250 00:05:57.523 tick 100 00:05:57.523 tick 100 00:05:57.523 tick 100 00:05:57.523 tick 250 00:05:57.523 tick 500 00:05:57.523 tick 100 00:05:57.523 tick 100 00:05:57.523 tick 250 00:05:57.523 tick 100 00:05:57.523 tick 100 00:05:57.523 test_end 00:05:57.523 00:05:57.523 real 0m1.213s 00:05:57.523 user 0m1.134s 00:05:57.523 sys 0m0.075s 00:05:57.523 20:19:49 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.523 20:19:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 ************************************ 00:05:57.523 END TEST event_reactor 00:05:57.523 ************************************ 00:05:57.523 20:19:49 event -- common/autotest_common.sh@1142 -- # return 0 00:05:57.523 20:19:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.523 20:19:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:57.523 20:19:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.523 20:19:49 event -- common/autotest_common.sh@10 -- # set +x 00:05:57.523 ************************************ 00:05:57.523 START TEST event_reactor_perf 00:05:57.523 ************************************ 00:05:57.523 20:19:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:57.523 [2024-07-15 20:19:49.628045] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
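reactor_perf, exercised next, is a self-contained microbenchmark binary; under the same assumptions about this job's paths it can be run by hand:

    # Run the reactor event-processing benchmark for 1 second
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
    # It exits after printing a single "Performance: <N> events per second" line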
00:05:57.523 [2024-07-15 20:19:49.628142] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112373 ] 00:05:57.523 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.523 [2024-07-15 20:19:49.700864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:57.523 [2024-07-15 20:19:49.770841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.465 test_start 00:05:58.465 test_end 00:05:58.465 Performance: 363270 events per second 00:05:58.465 00:05:58.465 real 0m1.219s 00:05:58.465 user 0m1.135s 00:05:58.465 sys 0m0.079s 00:05:58.465 20:19:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.465 20:19:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:58.465 ************************************ 00:05:58.465 END TEST event_reactor_perf 00:05:58.465 ************************************ 00:05:58.727 20:19:50 event -- common/autotest_common.sh@1142 -- # return 0 00:05:58.727 20:19:50 event -- event/event.sh@49 -- # uname -s 00:05:58.727 20:19:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:58.727 20:19:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.727 20:19:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:58.727 20:19:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:58.727 20:19:50 event -- common/autotest_common.sh@10 -- # set +x 00:05:58.727 ************************************ 00:05:58.727 START TEST event_scheduler 00:05:58.727 ************************************ 00:05:58.727 20:19:50 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:58.727 * Looking for test storage... 00:05:58.727 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:58.727 20:19:50 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:58.727 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1112585 00:05:58.727 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:58.727 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:58.727 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1112585 00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 1112585 ']' 00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
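scheduler.sh starts the app with --wait-for-rpc so a scheduler can be chosen before the framework initializes; in sketch form, the RPC sequence it then issues (both methods appear in the rpc_get_methods listing earlier):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Select the dynamic scheduler; the dpdk_governor ERROR below is the expected
    # fallback when the core mask covers only some SMT siblings
    $RPC framework_set_scheduler dynamic
    # Only now let framework initialization complete
    $RPC framework_start_init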
00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.727 20:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:58.727 [2024-07-15 20:19:51.050964] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:05:58.727 [2024-07-15 20:19:51.051027] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1112585 ] 00:05:58.727 EAL: No free 2048 kB hugepages reported on node 1 00:05:58.988 [2024-07-15 20:19:51.111714] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:58.988 [2024-07-15 20:19:51.178506] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:58.988 [2024-07-15 20:19:51.178667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:58.988 [2024-07-15 20:19:51.178786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:58.988 [2024-07-15 20:19:51.178787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:59.560 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.560 [2024-07-15 20:19:51.840949] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:59.560 [2024-07-15 20:19:51.840962] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:59.560 [2024-07-15 20:19:51.840970] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:59.560 [2024-07-15 20:19:51.840974] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:59.560 [2024-07-15 20:19:51.840978] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.560 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.560 [2024-07-15 20:19:51.895386] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
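The scheduler_create_thread subtest that follows drives thread creation through a test-local rpc.py plugin; representative invocations matching the -n/-m/-a arguments below, assuming the plugin's directory is on PYTHONPATH as scheduler.sh arranges:

    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py --plugin scheduler_plugin"
    # A thread pinned to core 0 (cpumask 0x1) reporting 100% active load
    $RPC scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # A pinned thread reporting 0% load, for the idle-rebalancing case
    $RPC scheduler_thread_create -n idle_pinned -m 0x1 -a 0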
00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.560 20:19:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.560 20:19:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:59.560 ************************************ 00:05:59.560 START TEST scheduler_create_thread 00:05:59.560 ************************************ 00:05:59.560 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:59.560 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:59.560 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.560 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 2 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 3 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 4 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 5 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.819 6 00:05:59.819 20:19:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.820 20:19:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:59.820 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.820 20:19:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.820 7 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.820 8 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.820 9 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:59.820 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.390 10 00:06:00.390 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:00.390 20:19:52 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:00.390 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:00.390 20:19:52 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:01.776 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:01.776 20:19:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:01.776 20:19:53 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:01.776 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:01.776 20:19:53 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.346 20:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.346 20:19:54 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:02.346 20:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.346 20:19:54 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.285 20:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.285 20:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:03.285 20:19:55 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:03.285 20:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:03.285 20:19:55 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.855 20:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:03.855 00:06:03.855 real 0m4.223s 00:06:03.855 user 0m0.020s 00:06:03.855 sys 0m0.011s 00:06:03.855 20:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.855 20:19:56 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:03.855 ************************************ 00:06:03.855 END TEST scheduler_create_thread 00:06:03.855 ************************************ 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:03.855 20:19:56 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:03.855 20:19:56 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1112585 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 1112585 ']' 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 1112585 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:03.855 20:19:56 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1112585 00:06:04.115 20:19:56 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:04.115 20:19:56 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:04.115 20:19:56 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1112585' 00:06:04.115 killing process with pid 1112585 00:06:04.115 20:19:56 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 1112585 00:06:04.115 20:19:56 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 1112585 00:06:04.115 [2024-07-15 20:19:56.436589] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:06:04.375 00:06:04.375 real 0m5.705s 00:06:04.375 user 0m12.740s 00:06:04.375 sys 0m0.351s 00:06:04.375 20:19:56 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.375 20:19:56 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:04.375 ************************************ 00:06:04.375 END TEST event_scheduler 00:06:04.375 ************************************ 00:06:04.375 20:19:56 event -- common/autotest_common.sh@1142 -- # return 0 00:06:04.375 20:19:56 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:04.375 20:19:56 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:04.375 20:19:56 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:04.375 20:19:56 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.375 20:19:56 event -- common/autotest_common.sh@10 -- # set +x 00:06:04.375 ************************************ 00:06:04.375 START TEST app_repeat 00:06:04.375 ************************************ 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1113877 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1113877' 00:06:04.375 Process app_repeat pid: 1113877 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:04.375 spdk_app_start Round 0 00:06:04.375 20:19:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1113877 /var/tmp/spdk-nbd.sock 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1113877 ']' 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:04.375 20:19:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.375 [2024-07-15 20:19:56.709586] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:04.375 [2024-07-15 20:19:56.709644] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1113877 ] 00:06:04.375 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.635 [2024-07-15 20:19:56.778168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:04.635 [2024-07-15 20:19:56.848021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:04.635 [2024-07-15 20:19:56.848024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.206 20:19:57 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:05.206 20:19:57 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:05.206 20:19:57 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.465 Malloc0 00:06:05.465 20:19:57 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:05.465 Malloc1 00:06:05.465 20:19:57 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:05.465 20:19:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.466 20:19:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.726 /dev/nbd0 00:06:05.726 20:19:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.726 20:19:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.726 20:19:57 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:05.726 20:19:57 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.726 20:19:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.726 20:19:57 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.726 20:19:57 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:05.726 20:19:57 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.726 1+0 records in 00:06:05.726 1+0 records out 00:06:05.726 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00029224 s, 14.0 MB/s 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.726 20:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.726 20:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.726 20:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.726 20:19:58 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.986 /dev/nbd1 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.986 1+0 records in 00:06:05.986 1+0 records out 00:06:05.986 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194571 s, 21.1 MB/s 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:05.986 20:19:58 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.986 20:19:58 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.986 { 00:06:05.986 "nbd_device": "/dev/nbd0", 00:06:05.986 "bdev_name": "Malloc0" 00:06:05.986 }, 00:06:05.986 { 00:06:05.986 "nbd_device": "/dev/nbd1", 00:06:05.986 "bdev_name": "Malloc1" 00:06:05.986 } 00:06:05.986 ]' 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.986 { 00:06:05.986 "nbd_device": "/dev/nbd0", 00:06:05.986 "bdev_name": "Malloc0" 00:06:05.986 }, 00:06:05.986 { 00:06:05.986 "nbd_device": "/dev/nbd1", 00:06:05.986 "bdev_name": "Malloc1" 00:06:05.986 } 00:06:05.986 ]' 00:06:05.986 20:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:06.248 /dev/nbd1' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:06.248 /dev/nbd1' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:06.248 256+0 records in 00:06:06.248 256+0 records out 00:06:06.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.011825 s, 88.7 MB/s 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:06.248 256+0 records in 00:06:06.248 256+0 records out 00:06:06.248 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0155793 s, 67.3 MB/s 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:06.248 256+0 records in 00:06:06.248 256+0 records out 00:06:06.248 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0168885 s, 62.1 MB/s 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.248 20:19:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.509 20:19:58 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.509 20:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.769 20:19:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.769 20:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.769 20:19:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.769 20:19:59 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.769 20:19:59 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:07.029 20:19:59 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:07.029 [2024-07-15 20:19:59.311690] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:07.029 [2024-07-15 20:19:59.376589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:07.029 [2024-07-15 20:19:59.376593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.029 [2024-07-15 20:19:59.408123] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:07.029 [2024-07-15 20:19:59.408160] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:10.329 spdk_app_start Round 1 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1113877 /var/tmp/spdk-nbd.sock 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1113877 ']' 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
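Note: each app_repeat round runs the same data check against the freshly mapped nbd devices; with the xtrace prefixes removed, the write/verify step from nbd_common.sh is roughly as below (file path, block sizes and flags as logged):

  tmp=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
  nbd_list=(/dev/nbd0 /dev/nbd1)
  # seed 1 MiB of random data and push it to every mapped device with O_DIRECT
  dd if=/dev/urandom of="$tmp" bs=4096 count=256
  for nbd in "${nbd_list[@]}"; do
      dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
  done
  # read each device back and compare byte-for-byte against the seed file
  for nbd in "${nbd_list[@]}"; do
      cmp -b -n 1M "$tmp" "$nbd"
  done
  rm "$tmp"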
00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.329 20:20:02 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.329 Malloc0 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:10.329 Malloc1 00:06:10.329 20:20:02 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.329 20:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:10.591 /dev/nbd0 00:06:10.591 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:10.591 20:20:02 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:10.591 1+0 records in 00:06:10.591 1+0 records out 00:06:10.591 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278537 s, 14.7 MB/s 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.591 20:20:02 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.591 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.591 20:20:02 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.591 20:20:02 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:10.852 /dev/nbd1 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:10.852 1+0 records in 00:06:10.852 1+0 records out 00:06:10.852 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000285135 s, 14.4 MB/s 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:10.852 20:20:03 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:10.852 { 00:06:10.852 "nbd_device": "/dev/nbd0", 00:06:10.852 "bdev_name": "Malloc0" 00:06:10.852 }, 00:06:10.852 { 00:06:10.852 "nbd_device": "/dev/nbd1", 00:06:10.852 "bdev_name": "Malloc1" 00:06:10.852 } 00:06:10.852 ]' 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:10.852 { 00:06:10.852 "nbd_device": "/dev/nbd0", 00:06:10.852 "bdev_name": "Malloc0" 00:06:10.852 }, 00:06:10.852 { 00:06:10.852 "nbd_device": "/dev/nbd1", 00:06:10.852 "bdev_name": "Malloc1" 00:06:10.852 } 00:06:10.852 ]' 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:10.852 /dev/nbd1' 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:10.852 /dev/nbd1' 00:06:10.852 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:11.117 256+0 records in 00:06:11.117 256+0 records out 00:06:11.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0115506 s, 90.8 MB/s 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:11.117 256+0 records in 00:06:11.117 256+0 records out 00:06:11.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0159267 s, 65.8 MB/s 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:11.117 256+0 records in 00:06:11.117 256+0 records out 00:06:11.117 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0167637 s, 62.6 MB/s 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:11.117 20:20:03 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:11.402 20:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:11.674 20:20:03 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:11.674 20:20:03 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:11.674 20:20:04 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:11.934 [2024-07-15 20:20:04.156674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:11.934 [2024-07-15 20:20:04.220490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:11.934 [2024-07-15 20:20:04.220493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.934 [2024-07-15 20:20:04.252824] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:11.934 [2024-07-15 20:20:04.252860] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:15.238 spdk_app_start Round 2 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1113877 /var/tmp/spdk-nbd.sock 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1113877 ']' 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:15.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
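Note: the nbd_get_count check that closes each round asks the target over the same Unix socket which devices are still mapped and counts them out of the JSON reply; in outline (socket path and jq filter as logged, with the '|| true' guard matching the bare 'true' in the trace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nbd_disks_json=$($rpc -s /var/tmp/spdk-nbd.sock nbd_get_disks)
  nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')
  # grep -c exits non-zero when there are zero matches, hence the fallback
  count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
  # after nbd_stop_disk both devices must be gone: '[]' -> count 0
  [ "$count" -eq 0 ]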
00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:15.238 20:20:07 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.238 Malloc0 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:15.238 Malloc1 00:06:15.238 20:20:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:15.238 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.239 20:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:15.500 /dev/nbd0 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:15.500 1+0 records in 00:06:15.500 1+0 records out 00:06:15.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000212843 s, 19.2 MB/s 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:15.500 /dev/nbd1 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:15.500 1+0 records in 00:06:15.500 1+0 records out 00:06:15.500 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000278056 s, 14.7 MB/s 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:15.500 20:20:07 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:15.500 20:20:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:15.762 { 00:06:15.762 "nbd_device": "/dev/nbd0", 00:06:15.762 "bdev_name": "Malloc0" 00:06:15.762 }, 00:06:15.762 { 00:06:15.762 "nbd_device": "/dev/nbd1", 00:06:15.762 "bdev_name": "Malloc1" 00:06:15.762 } 00:06:15.762 ]' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:15.762 { 00:06:15.762 "nbd_device": "/dev/nbd0", 00:06:15.762 "bdev_name": "Malloc0" 00:06:15.762 }, 00:06:15.762 { 00:06:15.762 "nbd_device": "/dev/nbd1", 00:06:15.762 "bdev_name": "Malloc1" 00:06:15.762 } 00:06:15.762 ]' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:15.762 /dev/nbd1' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:15.762 /dev/nbd1' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:15.762 256+0 records in 00:06:15.762 256+0 records out 00:06:15.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0116797 s, 89.8 MB/s 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:15.762 256+0 records in 00:06:15.762 256+0 records out 00:06:15.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0175714 s, 59.7 MB/s 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:15.762 256+0 records in 00:06:15.762 256+0 records out 00:06:15.762 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0172477 s, 60.8 MB/s 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:15.762 20:20:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:16.024 20:20:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:16.284 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:16.546 20:20:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:16.546 20:20:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:16.546 20:20:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:16.806 [2024-07-15 20:20:08.986387] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.806 [2024-07-15 20:20:09.050795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:16.806 [2024-07-15 20:20:09.050797] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.806 [2024-07-15 20:20:09.082178] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:16.806 [2024-07-15 20:20:09.082216] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:20.108 20:20:11 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1113877 /var/tmp/spdk-nbd.sock 00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 1113877 ']' 00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:20.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
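Note: the map/unmap steps in every round are gated by the waitfornbd/waitfornbd_exit helpers visible in the trace; their core is a bounded poll of /proc/partitions. A sketch consistent with the logged control flow (the loop bound of 20 and the grep -q -w test are as traced; the sleep interval is an assumption, since the trace does not show it):

  waitfornbd_exit() {
      local nbd_name=$1 i
      for ((i = 1; i <= 20; i++)); do
          if grep -q -w "$nbd_name" /proc/partitions; then
              sleep 0.1   # still present, give the kernel a moment
          else
              break       # partition entry gone, the device is detached
          fi
      done
      return 0
  }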
00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.108 20:20:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:20.108 20:20:12 event.app_repeat -- event/event.sh@39 -- # killprocess 1113877 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 1113877 ']' 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 1113877 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1113877 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1113877' 00:06:20.108 killing process with pid 1113877 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@967 -- # kill 1113877 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@972 -- # wait 1113877 00:06:20.108 spdk_app_start is called in Round 0. 00:06:20.108 Shutdown signal received, stop current app iteration 00:06:20.108 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:06:20.108 spdk_app_start is called in Round 1. 00:06:20.108 Shutdown signal received, stop current app iteration 00:06:20.108 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:06:20.108 spdk_app_start is called in Round 2. 00:06:20.108 Shutdown signal received, stop current app iteration 00:06:20.108 Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 reinitialization... 00:06:20.108 spdk_app_start is called in Round 3. 
00:06:20.108 Shutdown signal received, stop current app iteration 00:06:20.108 20:20:12 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:20.108 20:20:12 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:20.108 00:06:20.108 real 0m15.503s 00:06:20.108 user 0m33.465s 00:06:20.108 sys 0m2.112s 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:20.108 20:20:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:20.108 ************************************ 00:06:20.108 END TEST app_repeat 00:06:20.108 ************************************ 00:06:20.108 20:20:12 event -- common/autotest_common.sh@1142 -- # return 0 00:06:20.108 20:20:12 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:20.108 20:20:12 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.108 20:20:12 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.108 20:20:12 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.108 20:20:12 event -- common/autotest_common.sh@10 -- # set +x 00:06:20.108 ************************************ 00:06:20.108 START TEST cpu_locks 00:06:20.108 ************************************ 00:06:20.108 20:20:12 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:20.108 * Looking for test storage... 00:06:20.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:20.108 20:20:12 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:20.108 20:20:12 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:20.108 20:20:12 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:20.108 20:20:12 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:20.108 20:20:12 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:20.108 20:20:12 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:20.108 20:20:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.108 ************************************ 00:06:20.108 START TEST default_locks 00:06:20.108 ************************************ 00:06:20.108 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:20.108 20:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1117132 00:06:20.108 20:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1117132 00:06:20.108 20:20:12 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:20.108 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1117132 ']' 00:06:20.109 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:20.109 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:20.109 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:20.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
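
The default_locks case starting here boots spdk_tgt on core mask 0x1 and then asserts that the daemon holds a per-core lock file, via the locks_exist helper visible in the trace below (lslocks piped into grep -q spdk_cpu_lock). A standalone sketch of that check, assuming lslocks from util-linux; the recurring "lslocks: write error" in this run is almost certainly lslocks hitting a closed pipe once grep -q has matched and exited, not a test failure:

  # Does the process with this pid hold an SPDK CPU-core lock file?
  locks_exist() {
      lslocks -p "$1" | grep -q spdk_cpu_lock
  }
  locks_exist 1117132 && echo "core lock held"
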
00:06:20.109 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:20.109 20:20:12 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:20.109 [2024-07-15 20:20:12.440689] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:20.109 [2024-07-15 20:20:12.440743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117132 ] 00:06:20.109 EAL: No free 2048 kB hugepages reported on node 1 00:06:20.369 [2024-07-15 20:20:12.509447] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:20.369 [2024-07-15 20:20:12.583056] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:20.941 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:20.941 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:20.941 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1117132 00:06:20.941 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1117132 00:06:20.941 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:21.201 lslocks: write error 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1117132 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 1117132 ']' 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 1117132 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117132 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117132' 00:06:21.201 killing process with pid 1117132 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 1117132 00:06:21.201 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 1117132 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1117132 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1117132 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 1117132 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 1117132 ']' 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1117132) - No such process 00:06:21.462 ERROR: process (pid: 1117132) is no longer running 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:21.462 00:06:21.462 real 0m1.225s 00:06:21.462 user 0m1.311s 00:06:21.462 sys 0m0.368s 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.462 20:20:13 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.462 ************************************ 00:06:21.462 END TEST default_locks 00:06:21.462 ************************************ 00:06:21.462 20:20:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:21.462 20:20:13 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:21.462 20:20:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:21.462 20:20:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.462 20:20:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:21.462 ************************************ 00:06:21.462 START TEST default_locks_via_rpc 00:06:21.462 ************************************ 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1117491 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1117491 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- 
event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1117491 ']' 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.462 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:21.462 20:20:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:21.462 [2024-07-15 20:20:13.734785] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:21.462 [2024-07-15 20:20:13.734834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117491 ] 00:06:21.462 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.462 [2024-07-15 20:20:13.800256] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.722 [2024-07-15 20:20:13.866046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1117491 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1117491 00:06:22.294 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
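
default_locks_via_rpc, running here, toggles the same locks at runtime instead of at startup: framework_disable_cpumask_locks releases the per-core files (the no_locks helper then finds no /var/tmp/spdk_cpu_lock_* left), and framework_enable_cpumask_locks claims them back, which the lslocks check above just confirmed. Driving the same toggle by hand might read as below, assuming the default /var/tmp/spdk.sock RPC socket and the target's pid in $tgt_pid:

  # Release the per-core lock files of a running target, then re-claim them.
  scripts/rpc.py framework_disable_cpumask_locks
  ls /var/tmp/spdk_cpu_lock_* 2>/dev/null     # expect no output while disabled
  scripts/rpc.py framework_enable_cpumask_locks
  lslocks -p "$tgt_pid" | grep spdk_cpu_lock  # the lock is held again
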
00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1117491 00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 1117491 ']' 00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 1117491 00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:22.555 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117491 00:06:22.815 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:22.815 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:22.815 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117491' 00:06:22.815 killing process with pid 1117491 00:06:22.815 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 1117491 00:06:22.815 20:20:14 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 1117491 00:06:22.815 00:06:22.815 real 0m1.486s 00:06:22.815 user 0m1.557s 00:06:22.815 sys 0m0.494s 00:06:22.815 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.815 20:20:15 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.815 ************************************ 00:06:22.815 END TEST default_locks_via_rpc 00:06:22.815 ************************************ 00:06:23.075 20:20:15 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:23.075 20:20:15 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:23.075 20:20:15 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:23.075 20:20:15 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:23.075 20:20:15 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:23.075 ************************************ 00:06:23.075 START TEST non_locking_app_on_locked_coremask 00:06:23.075 ************************************ 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1117859 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1117859 /var/tmp/spdk.sock 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1117859 ']' 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:23.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.075 20:20:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:23.075 [2024-07-15 20:20:15.307952] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:23.075 [2024-07-15 20:20:15.308008] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117859 ] 00:06:23.075 EAL: No free 2048 kB hugepages reported on node 1 00:06:23.075 [2024-07-15 20:20:15.376722] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:23.075 [2024-07-15 20:20:15.449668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1117891 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1117891 /var/tmp/spdk2.sock 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1117891 ']' 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:24.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:24.015 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:24.015 [2024-07-15 20:20:16.089840] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:24.015 [2024-07-15 20:20:16.089893] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1117891 ] 00:06:24.015 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.015 [2024-07-15 20:20:16.185740] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
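
This is the point of non_locking_app_on_locked_coremask: the first target already holds core 0, yet the second one just came up on the same core because it was launched with --disable-cpumask-locks (hence the "CPU core locks deactivated" notice above). The scenario, reduced to the two launches from the trace:

  # First instance claims core 0 via /var/tmp/spdk_cpu_lock_000.
  build/bin/spdk_tgt -m 0x1 &
  # Second instance shares core 0 by skipping the lock; it gets its own
  # RPC socket so the two daemons do not collide on /var/tmp/spdk.sock.
  build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
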
00:06:24.015 [2024-07-15 20:20:16.185770] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.015 [2024-07-15 20:20:16.317805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.587 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.587 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:24.587 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1117859 00:06:24.587 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1117859 00:06:24.587 20:20:16 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:25.157 lslocks: write error 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1117859 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1117859 ']' 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1117859 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117859 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117859' 00:06:25.157 killing process with pid 1117859 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1117859 00:06:25.157 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1117859 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1117891 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1117891 ']' 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1117891 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1117891 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1117891' 00:06:25.727 
killing process with pid 1117891 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1117891 00:06:25.727 20:20:17 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1117891 00:06:25.988 00:06:25.988 real 0m2.897s 00:06:25.988 user 0m3.153s 00:06:25.988 sys 0m0.847s 00:06:25.988 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.988 20:20:18 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.988 ************************************ 00:06:25.988 END TEST non_locking_app_on_locked_coremask 00:06:25.988 ************************************ 00:06:25.988 20:20:18 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:25.988 20:20:18 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:25.988 20:20:18 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:25.988 20:20:18 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.988 20:20:18 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:25.988 ************************************ 00:06:25.988 START TEST locking_app_on_unlocked_coremask 00:06:25.988 ************************************ 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1118484 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1118484 /var/tmp/spdk.sock 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1118484 ']' 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:25.988 20:20:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:25.988 [2024-07-15 20:20:18.268868] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:25.988 [2024-07-15 20:20:18.268930] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118484 ] 00:06:25.988 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.988 [2024-07-15 20:20:18.340097] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:06:25.988 [2024-07-15 20:20:18.340134] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.249 [2024-07-15 20:20:18.411094] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1118582 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1118582 /var/tmp/spdk2.sock 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1118582 ']' 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:26.821 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:26.821 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:26.821 [2024-07-15 20:20:19.096783] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:26.821 [2024-07-15 20:20:19.096836] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118582 ] 00:06:26.821 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.821 [2024-07-15 20:20:19.193625] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.082 [2024-07-15 20:20:19.327221] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.655 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:27.655 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:27.655 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1118582 00:06:27.655 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1118582 00:06:27.655 20:20:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:28.225 lslocks: write error 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1118484 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1118484 ']' 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1118484 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118484 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118484' 00:06:28.225 killing process with pid 1118484 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1118484 00:06:28.225 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1118484 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1118582 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1118582 ']' 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 1118582 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:28.486 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118582 00:06:28.747 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:28.747 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:28.747 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118582' 00:06:28.747 killing process with pid 1118582 00:06:28.747 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 1118582 00:06:28.747 20:20:20 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 1118582 00:06:28.747 00:06:28.747 real 0m2.909s 00:06:28.747 user 0m3.167s 00:06:28.747 sys 0m0.881s 00:06:28.747 20:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.747 20:20:21 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:28.747 ************************************ 00:06:28.747 END TEST locking_app_on_unlocked_coremask 00:06:28.747 ************************************ 00:06:29.008 20:20:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:29.008 20:20:21 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:29.008 20:20:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:29.008 20:20:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.008 20:20:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:29.008 ************************************ 00:06:29.008 START TEST locking_app_on_locked_coremask 00:06:29.008 ************************************ 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1118982 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1118982 /var/tmp/spdk.sock 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1118982 ']' 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:29.008 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.008 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:29.008 [2024-07-15 20:20:21.244540] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:29.008 [2024-07-15 20:20:21.244590] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1118982 ] 00:06:29.008 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.008 [2024-07-15 20:20:21.311462] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.008 [2024-07-15 20:20:21.380949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1119288 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1119288 /var/tmp/spdk2.sock 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1119288 /var/tmp/spdk2.sock 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:29.949 20:20:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1119288 /var/tmp/spdk2.sock 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 1119288 ']' 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:29.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.949 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:29.949 [2024-07-15 20:20:22.052941] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:29.949 [2024-07-15 20:20:22.052995] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119288 ] 00:06:29.949 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.949 [2024-07-15 20:20:22.150769] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1118982 has claimed it. 00:06:29.949 [2024-07-15 20:20:22.150806] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:30.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1119288) - No such process 00:06:30.520 ERROR: process (pid: 1119288) is no longer running 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1118982 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1118982 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:30.520 lslocks: write error 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1118982 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 1118982 ']' 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 1118982 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:30.520 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1118982 00:06:30.781 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:30.781 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:30.781 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1118982' 00:06:30.781 killing process with pid 1118982 00:06:30.781 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 1118982 00:06:30.781 20:20:22 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 1118982 00:06:30.781 00:06:30.781 real 0m1.950s 00:06:30.781 user 0m2.154s 00:06:30.781 sys 0m0.525s 00:06:30.781 20:20:23 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:30.781 20:20:23 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:30.781 ************************************ 00:06:30.781 END TEST locking_app_on_locked_coremask 00:06:30.781 ************************************ 00:06:31.042 20:20:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:31.042 20:20:23 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:31.042 20:20:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:31.042 20:20:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.042 20:20:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 ************************************ 00:06:31.042 START TEST locking_overlapped_coremask 00:06:31.042 ************************************ 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1119517 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1119517 /var/tmp/spdk.sock 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1119517 ']' 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:31.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.042 20:20:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:31.042 [2024-07-15 20:20:23.263261] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:31.042 [2024-07-15 20:20:23.263311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119517 ] 00:06:31.042 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.042 [2024-07-15 20:20:23.327867] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:31.042 [2024-07-15 20:20:23.394202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:31.042 [2024-07-15 20:20:23.394223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:31.042 [2024-07-15 20:20:23.394226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1119669 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1119669 /var/tmp/spdk2.sock 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 1119669 /var/tmp/spdk2.sock 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 1119669 /var/tmp/spdk2.sock 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 1119669 ']' 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:31.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:31.985 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:31.985 [2024-07-15 20:20:24.085195] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:06:31.985 [2024-07-15 20:20:24.085265] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119669 ] 00:06:31.985 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.985 [2024-07-15 20:20:24.166278] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1119517 has claimed it. 00:06:31.985 [2024-07-15 20:20:24.166308] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:32.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (1119669) - No such process 00:06:32.557 ERROR: process (pid: 1119669) is no longer running 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1119517 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 1119517 ']' 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 1119517 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119517 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119517' 00:06:32.557 killing process with pid 1119517 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- 
common/autotest_common.sh@967 -- # kill 1119517 00:06:32.557 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 1119517 00:06:32.817 00:06:32.817 real 0m1.749s 00:06:32.817 user 0m4.944s 00:06:32.817 sys 0m0.367s 00:06:32.817 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.817 20:20:24 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 ************************************ 00:06:32.817 END TEST locking_overlapped_coremask 00:06:32.817 ************************************ 00:06:32.817 20:20:24 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:32.817 20:20:24 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:32.817 20:20:24 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.817 20:20:24 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.817 20:20:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:32.817 ************************************ 00:06:32.817 START TEST locking_overlapped_coremask_via_rpc 00:06:32.817 ************************************ 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1119968 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1119968 /var/tmp/spdk.sock 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1119968 ']' 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.817 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.818 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.818 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.818 [2024-07-15 20:20:25.097674] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:32.818 [2024-07-15 20:20:25.097726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1119968 ] 00:06:32.818 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.818 [2024-07-15 20:20:25.164174] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
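
locking_overlapped_coremask, which finishes above, is the partial-overlap case: mask 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the second target dies on the one shared core (core 2) and the survivor must still hold exactly its three lock files. A sketch of that final assertion, following the check_remaining_locks expansion shown in the trace:

  # After the overlapping launch fails, the first target (mask 0x7) must
  # still own exactly one lock file per core: 000, 001 and 002.
  locks=(/var/tmp/spdk_cpu_lock_*)
  expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${expected[*]}" ]] && echo "lock set intact"
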
00:06:32.818 [2024-07-15 20:20:25.164202] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.077 [2024-07-15 20:20:25.230179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:33.077 [2024-07-15 20:20:25.230312] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.077 [2024-07-15 20:20:25.230484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1120034 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1120034 /var/tmp/spdk2.sock 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1120034 ']' 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:33.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:33.646 20:20:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.646 [2024-07-15 20:20:25.885593] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:33.646 [2024-07-15 20:20:25.885643] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120034 ] 00:06:33.646 EAL: No free 2048 kB hugepages reported on node 1 00:06:33.646 [2024-07-15 20:20:25.969428] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:33.646 [2024-07-15 20:20:25.969454] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:33.907 [2024-07-15 20:20:26.070759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:33.907 [2024-07-15 20:20:26.074351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:33.907 [2024-07-15 20:20:26.074353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.476 [2024-07-15 20:20:26.674294] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1119968 has claimed it. 
00:06:34.476 request: 00:06:34.476 { 00:06:34.476 "method": "framework_enable_cpumask_locks", 00:06:34.476 "req_id": 1 00:06:34.476 } 00:06:34.476 Got JSON-RPC error response 00:06:34.476 response: 00:06:34.476 { 00:06:34.476 "code": -32603, 00:06:34.476 "message": "Failed to claim CPU core: 2" 00:06:34.476 } 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1119968 /var/tmp/spdk.sock 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1119968 ']' 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.476 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.476 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.477 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1120034 /var/tmp/spdk2.sock 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 1120034 ']' 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:34.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
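The -32603 error above is the expected outcome of this test: the first target was started with -m 0x7 (cores 0-2) and the second with -m 0x1c (cores 2-4), so the two cpumasks overlap on core 2. Both targets were launched with --disable-cpumask-locks, and the first framework_enable_cpumask_locks RPC claims lock files for cores 0-2; the second RPC then fails because core 2 is already held. A minimal sketch of the overlap check, using plain bash arithmetic (no SPDK required):

printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 -> core 2

# Hypothetical manual reproduction against the two running targets, assuming
# scripts/rpc.py from the SPDK tree is the client behind the harness's rpc_cmd:
./scripts/rpc.py framework_enable_cpumask_locks                          # first target: creates /var/tmp/spdk_cpu_lock_000..002
./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks  # second target: "Failed to claim CPU core: 2"

Those /var/tmp/spdk_cpu_lock_NNN files are the same ones the check_remaining_locks glob below compares against its expected list.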
00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.737 20:20:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:34.737 00:06:34.737 real 0m1.991s 00:06:34.737 user 0m0.768s 00:06:34.737 sys 0m0.148s 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.737 20:20:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.737 ************************************ 00:06:34.737 END TEST locking_overlapped_coremask_via_rpc 00:06:34.737 ************************************ 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:34.737 20:20:27 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:34.737 20:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1119968 ]] 00:06:34.737 20:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1119968 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1119968 ']' 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1119968 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.737 20:20:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1119968 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1119968' 00:06:34.998 killing process with pid 1119968 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1119968 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1119968 00:06:34.998 20:20:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1120034 ]] 00:06:34.998 20:20:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1120034 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1120034 ']' 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1120034 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@953 -- # 
uname 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.998 20:20:27 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1120034 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1120034' 00:06:35.259 killing process with pid 1120034 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 1120034 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 1120034 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1119968 ]] 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1119968 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1119968 ']' 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1119968 00:06:35.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1119968) - No such process 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1119968 is not found' 00:06:35.259 Process with pid 1119968 is not found 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1120034 ]] 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1120034 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 1120034 ']' 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 1120034 00:06:35.259 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1120034) - No such process 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 1120034 is not found' 00:06:35.259 Process with pid 1120034 is not found 00:06:35.259 20:20:27 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:35.259 00:06:35.259 real 0m15.332s 00:06:35.259 user 0m26.563s 00:06:35.259 sys 0m4.484s 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.259 20:20:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:35.259 ************************************ 00:06:35.259 END TEST cpu_locks 00:06:35.259 ************************************ 00:06:35.259 20:20:27 event -- common/autotest_common.sh@1142 -- # return 0 00:06:35.259 00:06:35.259 real 0m40.727s 00:06:35.259 user 1m19.358s 00:06:35.259 sys 0m7.559s 00:06:35.259 20:20:27 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:35.259 20:20:27 event -- common/autotest_common.sh@10 -- # set +x 00:06:35.259 ************************************ 00:06:35.259 END TEST event 00:06:35.259 ************************************ 00:06:35.520 20:20:27 -- common/autotest_common.sh@1142 -- # return 0 00:06:35.520 20:20:27 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.520 20:20:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:35.520 20:20:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.520 
20:20:27 -- common/autotest_common.sh@10 -- # set +x 00:06:35.520 ************************************ 00:06:35.520 START TEST thread 00:06:35.520 ************************************ 00:06:35.520 20:20:27 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:35.520 * Looking for test storage... 00:06:35.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:35.520 20:20:27 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.520 20:20:27 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:35.520 20:20:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:35.520 20:20:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:35.520 ************************************ 00:06:35.520 START TEST thread_poller_perf 00:06:35.520 ************************************ 00:06:35.520 20:20:27 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:35.520 [2024-07-15 20:20:27.853724] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:35.520 [2024-07-15 20:20:27.853824] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120475 ] 00:06:35.520 EAL: No free 2048 kB hugepages reported on node 1 00:06:35.780 [2024-07-15 20:20:27.929035] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:35.780 [2024-07-15 20:20:28.003892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.780 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:36.721 ====================================== 00:06:36.721 busy:2414459874 (cyc) 00:06:36.721 total_run_count: 288000 00:06:36.721 tsc_hz: 2400000000 (cyc) 00:06:36.721 ====================================== 00:06:36.721 poller_cost: 8383 (cyc), 3492 (nsec) 00:06:36.721 00:06:36.721 real 0m1.235s 00:06:36.721 user 0m1.142s 00:06:36.721 sys 0m0.088s 00:06:36.721 20:20:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.721 20:20:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.721 ************************************ 00:06:36.721 END TEST thread_poller_perf 00:06:36.721 ************************************ 00:06:36.981 20:20:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:36.981 20:20:29 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.981 20:20:29 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:36.981 20:20:29 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.981 20:20:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:36.981 ************************************ 00:06:36.981 START TEST thread_poller_perf 00:06:36.981 ************************************ 00:06:36.981 20:20:29 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:36.981 [2024-07-15 20:20:29.162204] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:36.981 [2024-07-15 20:20:29.162465] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1120829 ] 00:06:36.981 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.981 [2024-07-15 20:20:29.231669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.981 [2024-07-15 20:20:29.295852] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.981 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:38.367 ====================================== 00:06:38.367 busy:2402271738 (cyc) 00:06:38.367 total_run_count: 3805000 00:06:38.367 tsc_hz: 2400000000 (cyc) 00:06:38.367 ====================================== 00:06:38.367 poller_cost: 631 (cyc), 262 (nsec) 00:06:38.367 00:06:38.367 real 0m1.210s 00:06:38.367 user 0m1.131s 00:06:38.367 sys 0m0.075s 00:06:38.367 20:20:30 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.367 20:20:30 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 END TEST thread_poller_perf 00:06:38.367 ************************************ 00:06:38.367 20:20:30 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:38.367 20:20:30 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:38.367 00:06:38.367 real 0m2.695s 00:06:38.367 user 0m2.370s 00:06:38.367 sys 0m0.332s 00:06:38.367 20:20:30 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.367 20:20:30 thread -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 END TEST thread 00:06:38.367 ************************************ 00:06:38.367 20:20:30 -- common/autotest_common.sh@1142 -- # return 0 00:06:38.367 20:20:30 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:38.367 20:20:30 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.367 20:20:30 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.367 20:20:30 -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 ************************************ 00:06:38.367 START TEST accel 00:06:38.367 ************************************ 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:38.367 * Looking for test storage... 00:06:38.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:38.367 20:20:30 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:38.367 20:20:30 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:38.367 20:20:30 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:38.367 20:20:30 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=1121221 00:06:38.367 20:20:30 accel -- accel/accel.sh@63 -- # waitforlisten 1121221 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@829 -- # '[' -z 1121221 ']' 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
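Before the accel suite output continues, a quick sanity check of the two poller_perf result blocks above: poller_cost appears to be busy cycles divided by total_run_count, with the nanosecond figure converted via tsc_hz. Bash integer division (which truncates, matching the report) reproduces the exact values:

echo $(( 2414459874 / 288000 ))               # 8383 cyc per poller invocation (1 us period run)
echo $(( 8383 * 1000000000 / 2400000000 ))    # 3492 nsec at tsc_hz 2400000000
echo $(( 2402271738 / 3805000 ))              # 631 cyc per invocation (0 us period run)
echo $(( 631 * 1000000000 / 2400000000 ))     # 262 nsec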
00:06:38.367 20:20:30 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.367 20:20:30 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:38.367 20:20:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:38.367 20:20:30 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:38.367 20:20:30 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:38.367 20:20:30 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:38.367 20:20:30 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:38.367 20:20:30 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:38.367 20:20:30 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:38.367 20:20:30 accel -- accel/accel.sh@41 -- # jq -r . 00:06:38.367 [2024-07-15 20:20:30.623566] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:38.367 [2024-07-15 20:20:30.623637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121221 ] 00:06:38.367 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.367 [2024-07-15 20:20:30.694149] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.627 [2024-07-15 20:20:30.767858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.198 20:20:31 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.198 20:20:31 accel -- common/autotest_common.sh@862 -- # return 0 00:06:39.198 20:20:31 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:39.198 20:20:31 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:39.198 20:20:31 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:39.198 20:20:31 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:39.198 20:20:31 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:39.198 20:20:31 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:39.198 20:20:31 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:39.198 20:20:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.198 20:20:31 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:39.198 20:20:31 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:39.198 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.198 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.198 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.198 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.198 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.198 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.198 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.198 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # IFS== 00:06:39.199 20:20:31 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:39.199 20:20:31 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:39.199 20:20:31 accel -- accel/accel.sh@75 -- # killprocess 1121221 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@948 -- # '[' -z 1121221 ']' 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@952 -- # kill -0 1121221 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@953 -- # uname 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1121221 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1121221' 00:06:39.199 killing process with pid 1121221 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@967 -- # kill 1121221 00:06:39.199 20:20:31 accel -- common/autotest_common.sh@972 -- # wait 1121221 00:06:39.460 20:20:31 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:39.460 20:20:31 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.460 20:20:31 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:39.460 20:20:31 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:39.460 20:20:31 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:39.460 20:20:31 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:39.460 20:20:31 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:39.460 20:20:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:39.748 ************************************ 00:06:39.748 START TEST accel_missing_filename 00:06:39.748 ************************************ 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:39.748 20:20:31 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:39.748 20:20:31 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:39.748 [2024-07-15 20:20:31.882007] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:39.748 [2024-07-15 20:20:31.882108] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121566 ] 00:06:39.748 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.748 [2024-07-15 20:20:31.954997] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.748 [2024-07-15 20:20:32.027756] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.748 [2024-07-15 20:20:32.060278] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:39.748 [2024-07-15 20:20:32.097561] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:40.009 A filename is required. 
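"A filename is required." is the expected negative result here: the compress workload reads its input from the file passed with -l, and this test deliberately omits it. Hypothetical manual runs from the SPDK tree for contrast (the input path below is the one the compress_verify test uses next in this log; whether the second command completes depends on the build's software compress support):

./build/examples/accel_perf -t 1 -w compress                    # no -l: "A filename is required."
./build/examples/accel_perf -t 1 -w compress -l test/accel/bib  # with an input file the workload can start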
00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.009 00:06:40.009 real 0m0.301s 00:06:40.009 user 0m0.218s 00:06:40.009 sys 0m0.124s 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.009 20:20:32 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:40.009 ************************************ 00:06:40.009 END TEST accel_missing_filename 00:06:40.009 ************************************ 00:06:40.009 20:20:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.009 20:20:32 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.009 20:20:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:40.009 20:20:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.009 20:20:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.009 ************************************ 00:06:40.009 START TEST accel_compress_verify 00:06:40.009 ************************************ 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.009 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.009 20:20:32 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:40.009 20:20:32 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:40.009 [2024-07-15 20:20:32.257122] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:40.009 [2024-07-15 20:20:32.257217] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121630 ] 00:06:40.009 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.009 [2024-07-15 20:20:32.325995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.270 [2024-07-15 20:20:32.392794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.270 [2024-07-15 20:20:32.424613] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.270 [2024-07-15 20:20:32.461668] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:40.270 00:06:40.270 Compression does not support the verify option, aborting. 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.270 00:06:40.270 real 0m0.289s 00:06:40.270 user 0m0.223s 00:06:40.270 sys 0m0.109s 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.270 20:20:32 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 ************************************ 00:06:40.270 END TEST accel_compress_verify 00:06:40.270 ************************************ 00:06:40.270 20:20:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.270 20:20:32 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:40.270 20:20:32 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:40.270 20:20:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.270 20:20:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.270 ************************************ 00:06:40.270 START TEST accel_wrong_workload 00:06:40.270 ************************************ 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.270 20:20:32 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:40.270 20:20:32 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:40.270 Unsupported workload type: foobar 00:06:40.270 [2024-07-15 20:20:32.619330] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:40.270 accel_perf options: 00:06:40.270 [-h help message] 00:06:40.270 [-q queue depth per core] 00:06:40.270 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.270 [-T number of threads per core 00:06:40.270 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.270 [-t time in seconds] 00:06:40.270 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.270 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:40.270 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.270 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.270 [-S for crc32c workload, use this seed value (default 0) 00:06:40.270 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.270 [-f for fill workload, use this BYTE value (default 255) 00:06:40.270 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.270 [-y verify result if this switch is on] 00:06:40.270 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.270 Can be used to spread operations across a wider range of memory. 
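"Unsupported workload type: foobar" is likewise an intended failure; -w accepts only the workload names listed in the usage text above. For contrast, a valid invocation with these options (the crc32c test further down in this log passes exactly these flags):

./build/examples/accel_perf -t 1 -w crc32c -S 32 -y   # crc32c for 1 second, seed 32, verify results

The stray "Error: writing output failed: Broken pipe" lines around these negative tests look like accel_perf flushing its usage output after the test harness has closed the pipe, not additional failures.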
00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:40.270 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.271 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.271 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.271 00:06:40.271 real 0m0.037s 00:06:40.271 user 0m0.024s 00:06:40.271 sys 0m0.013s 00:06:40.271 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.271 20:20:32 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:40.271 ************************************ 00:06:40.271 END TEST accel_wrong_workload 00:06:40.271 ************************************ 00:06:40.271 Error: writing output failed: Broken pipe 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.532 20:20:32 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.532 ************************************ 00:06:40.532 START TEST accel_negative_buffers 00:06:40.532 ************************************ 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:40.532 20:20:32 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:40.532 -x option must be non-negative. 
00:06:40.532 [2024-07-15 20:20:32.731560] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:40.532 accel_perf options: 00:06:40.532 [-h help message] 00:06:40.532 [-q queue depth per core] 00:06:40.532 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:40.532 [-T number of threads per core 00:06:40.532 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:40.532 [-t time in seconds] 00:06:40.532 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:40.532 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:40.532 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:40.532 [-l for compress/decompress workloads, name of uncompressed input file 00:06:40.532 [-S for crc32c workload, use this seed value (default 0) 00:06:40.532 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:40.532 [-f for fill workload, use this BYTE value (default 255) 00:06:40.532 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:40.532 [-y verify result if this switch is on] 00:06:40.532 [-a tasks to allocate per core (default: same value as -q)] 00:06:40.532 Can be used to spread operations across a wider range of memory. 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:40.532 00:06:40.532 real 0m0.036s 00:06:40.532 user 0m0.023s 00:06:40.532 sys 0m0.013s 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:40.532 20:20:32 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:40.532 ************************************ 00:06:40.532 END TEST accel_negative_buffers 00:06:40.532 ************************************ 00:06:40.532 Error: writing output failed: Broken pipe 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:40.532 20:20:32 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:40.532 20:20:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:40.532 ************************************ 00:06:40.532 START TEST accel_crc32c 00:06:40.532 ************************************ 00:06:40.532 20:20:32 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:40.532 20:20:32 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:40.532 [2024-07-15 20:20:32.843797] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:40.532 [2024-07-15 20:20:32.843885] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1121692 ] 00:06:40.532 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.793 [2024-07-15 20:20:32.912817] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.793 [2024-07-15 20:20:32.977661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:40.793 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:40.794 20:20:33 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:41.759 20:20:34 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:41.759 00:06:41.759 real 0m1.292s 00:06:41.759 user 0m1.193s 00:06:41.759 sys 0m0.111s 00:06:41.759 20:20:34 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.759 20:20:34 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:41.759 ************************************ 00:06:41.759 END TEST accel_crc32c 00:06:41.759 ************************************ 00:06:42.025 20:20:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:42.025 20:20:34 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:42.025 20:20:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:42.025 20:20:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:42.025 20:20:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:42.025 ************************************ 00:06:42.025 START TEST accel_crc32c_C2 00:06:42.025 ************************************ 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:42.025 20:20:34 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:42.025 [2024-07-15 20:20:34.209152] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:42.025 [2024-07-15 20:20:34.209239] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122050 ] 00:06:42.025 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.025 [2024-07-15 20:20:34.276873] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.025 [2024-07-15 20:20:34.342324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.025 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:42.026 20:20:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:43.407 00:06:43.407 real 0m1.290s 00:06:43.407 user 0m1.198s 00:06:43.407 sys 0m0.103s 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:43.407 20:20:35 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:43.407 ************************************ 00:06:43.407 END TEST accel_crc32c_C2 00:06:43.407 ************************************ 00:06:43.407 20:20:35 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:43.407 20:20:35 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:43.407 20:20:35 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:43.407 20:20:35 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:43.407 20:20:35 accel -- common/autotest_common.sh@10 -- # set +x 00:06:43.407 ************************************ 00:06:43.407 START TEST accel_copy 00:06:43.407 ************************************ 00:06:43.407 20:20:35 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:43.407 [2024-07-15 20:20:35.573706] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:43.407 [2024-07-15 20:20:35.573791] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122397 ] 00:06:43.407 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.407 [2024-07-15 20:20:35.640763] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.407 [2024-07-15 20:20:35.704192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.407 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:43.408 20:20:35 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 
20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:44.792 20:20:36 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:44.793 20:20:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:44.793 20:20:36 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:44.793 00:06:44.793 real 0m1.287s 00:06:44.793 user 0m1.202s 00:06:44.793 sys 0m0.095s 00:06:44.793 20:20:36 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.793 20:20:36 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 END TEST accel_copy 00:06:44.793 ************************************ 00:06:44.793 20:20:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:44.793 20:20:36 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.793 20:20:36 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:44.793 20:20:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.793 20:20:36 accel -- common/autotest_common.sh@10 -- # set +x 00:06:44.793 ************************************ 00:06:44.793 START TEST accel_fill 00:06:44.793 ************************************ 00:06:44.793 20:20:36 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:44.793 20:20:36 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:44.793 [2024-07-15 20:20:36.936210] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:44.793 [2024-07-15 20:20:36.936310] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122715 ] 00:06:44.793 EAL: No free 2048 kB hugepages reported on node 1 00:06:44.793 [2024-07-15 20:20:37.005516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:44.793 [2024-07-15 20:20:37.072870] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
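For the fill test now being configured, the run_test line passes -f 128 -q 64 -a 64, and the trace confirms the fill byte as val=0x80 (decimal 128). A standalone sketch follows; -q and -a are carried over verbatim, and glossing them as queue depth and buffer alignment is an assumption:

  # fill 4096-byte buffers with 0x80 for 1 second, verifying the result
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y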
00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:44.793 20:20:37 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.208 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.208 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.208 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.208 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.208 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:46.209 20:20:38 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:46.209 00:06:46.209 real 0m1.295s 00:06:46.209 user 0m1.196s 00:06:46.209 sys 0m0.110s 00:06:46.209 20:20:38 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:46.209 20:20:38 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:46.209 ************************************ 00:06:46.209 END TEST accel_fill 00:06:46.209 ************************************ 00:06:46.209 20:20:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:46.209 20:20:38 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:46.209 20:20:38 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:46.209 20:20:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:46.209 20:20:38 accel -- common/autotest_common.sh@10 -- # set +x 00:06:46.209 ************************************ 00:06:46.209 START TEST accel_copy_crc32c 00:06:46.209 ************************************ 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:46.209 [2024-07-15 20:20:38.305270] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:46.209 [2024-07-15 20:20:38.305360] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1122903 ] 00:06:46.209 EAL: No free 2048 kB hugepages reported on node 1 00:06:46.209 [2024-07-15 20:20:38.376281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:46.209 [2024-07-15 20:20:38.446050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:46.209 
20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:46.209 20:20:38 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:47.594 00:06:47.594 real 0m1.298s 00:06:47.594 user 0m1.190s 00:06:47.594 sys 0m0.121s 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.594 20:20:39 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:47.594 ************************************ 00:06:47.594 END TEST accel_copy_crc32c 00:06:47.594 ************************************ 00:06:47.594 20:20:39 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:47.594 20:20:39 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.594 20:20:39 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:47.594 20:20:39 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.594 20:20:39 accel -- common/autotest_common.sh@10 -- # set +x 00:06:47.594 ************************************ 00:06:47.594 START TEST accel_copy_crc32c_C2 00:06:47.594 ************************************ 00:06:47.594 20:20:39 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:47.594 [2024-07-15 20:20:39.677797] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:47.594 [2024-07-15 20:20:39.677860] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123138 ] 00:06:47.594 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.594 [2024-07-15 20:20:39.747972] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:47.594 [2024-07-15 20:20:39.817362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
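As with plain crc32c, copy_crc32c is exercised a second time with -C 2; the config dump below shows both a '4096 bytes' and an '8192 bytes' value, which is consistent with two 4 KiB sources feeding one 8 KiB destination (an inference, not something the log states). Standalone sketch, same caveats as above:

  # copy+crc32c with an assumed two-element source vector
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2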
00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.594 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:47.595 20:20:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.978 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:48.979 00:06:48.979 real 0m1.298s 00:06:48.979 user 0m1.201s 00:06:48.979 sys 0m0.109s 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.979 20:20:40 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:48.979 ************************************ 00:06:48.979 END TEST accel_copy_crc32c_C2 00:06:48.979 ************************************ 00:06:48.979 20:20:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:48.979 20:20:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:48.979 20:20:40 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:48.979 20:20:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.979 20:20:40 accel -- common/autotest_common.sh@10 -- # set +x 00:06:48.979 ************************************ 00:06:48.979 START TEST accel_dualcast 00:06:48.979 ************************************ 00:06:48.979 20:20:41 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:48.979 [2024-07-15 20:20:41.048959] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
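dualcast, the workload starting here, copies one source buffer to two destination buffers. A standalone sketch of the invocation being traced, same caveats as the earlier examples:

  # one source, two destinations, verified after each operation
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w dualcast -y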
00:06:48.979 20:20:40 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
************************************
START TEST accel_dualcast
************************************
00:06:48.979 20:20:41 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:06:48.979 (accel.sh@31-41 build_accel_config: accel_json_cfg=(), three [[ 0 -gt 0 ]] gates false, [[ -n '' ]], local IFS=',', jq -r .)
00:06:48.979 [2024-07-15 20:20:41.048959] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:48.979 [2024-07-15 20:20:41.049035] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123485 ]
00:06:48.979 EAL: No free 2048 kB hugepages reported on node 1
00:06:48.979 [2024-07-15 20:20:41.117566] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:48.979 [2024-07-15 20:20:41.185900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:48.979-00:06:50.363 (accel.sh@19-23 config loop: val=0x1, accel_opc=dualcast, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes; trailing empty val= reads follow the run)
00:06:50.363 20:20:42 accel.accel_dualcast -- accel/accel.sh@27 checks: [[ -n software ]], [[ -n dualcast ]], [[ software == software ]]
00:06:50.363
real 0m1.294s
user 0m1.195s
sys 0m0.109s
************************************
END TEST accel_dualcast
************************************
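Each accel_test setup above runs build_accel_config before launching accel_perf; in this run all three feature gates trace as [[ 0 -gt 0 ]] and the @36 check sees [[ -n '' ]], so jq receives an empty array. A sketch reconstructed from the @31-41 trace (the gate variables and JSON payloads are placeholders; only the control flow is visible in the log):

  build_accel_config() {
      accel_json_cfg=()
      # each gate below was traced as [[ 0 -gt 0 ]], i.e. disabled in this run
      [[ ${ACCEL_GATE_A:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "..."}')
      [[ ${ACCEL_GATE_B:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "..."}')
      [[ ${ACCEL_GATE_C:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "..."}')
      local IFS=,                           # accel.sh@40
      jq -r . <<< "[${accel_json_cfg[*]}]"  # accel.sh@41; emits [] here, read via -c /dev/fd/62
  }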
00:06:50.363 20:20:42 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
************************************
START TEST accel_compare
************************************
00:06:50.363 20:20:42 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y
00:06:50.363 [2024-07-15 20:20:42.418281] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:50.363 [2024-07-15 20:20:42.418370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1123840 ]
00:06:50.363 EAL: No free 2048 kB hugepages reported on node 1
00:06:50.363 [2024-07-15 20:20:42.488246] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:50.363 [2024-07-15 20:20:42.556481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:50.363-00:06:51.565 (accel.sh@19-23 config loop: val=0x1, accel_opc=compare, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes; trailing empty val= reads follow the run)
00:06:51.565 20:20:43 accel.accel_compare -- accel/accel.sh@27 checks: [[ -n software ]], [[ -n compare ]], [[ software == software ]]
00:06:51.565
real 0m1.296s
user 0m1.200s
sys 0m0.106s
************************************
END TEST accel_compare
************************************
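Every test closes with the same accel.sh@27 assertion before its timing block: the module and opcode parsed out of accel_perf's output must be non-empty, and the module must match the expected one. The expansions in this log suggest roughly this check (the expected-module variable name is assumed):

  # fail the test unless accel_perf reported an opcode and ran on the expected module
  [[ -n $accel_module ]] && [[ -n $accel_opc ]] && [[ $accel_module == "$expected_module" ]]

The backslash-escaped right-hand side in the trace, [[ software == \s\o\f\t\w\a\r\e ]], is how xtrace renders a quoted (literal, non-glob) comparison operand.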
00:06:51.565 20:20:43 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
************************************
START TEST accel_xor
************************************
00:06:51.565 20:20:43 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y
00:06:51.565 [2024-07-15 20:20:43.788219] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:51.565 [2024-07-15 20:20:43.788290] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124190 ]
00:06:51.565 EAL: No free 2048 kB hugepages reported on node 1
00:06:51.565 [2024-07-15 20:20:43.856764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:51.565 [2024-07-15 20:20:43.926338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:51.826-00:06:52.768 (accel.sh@19-23 config loop: val=0x1, accel_opc=xor, val=2, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes; trailing empty val= reads follow the run)
00:06:52.769 20:20:45 accel.accel_xor -- accel/accel.sh@27 checks: [[ -n software ]], [[ -n xor ]], [[ software == software ]]
00:06:52.769
real 0m1.294s
user 0m1.205s
sys 0m0.102s
************************************
END TEST accel_xor
************************************
00:06:52.769 20:20:45 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
************************************
START TEST accel_xor
************************************
00:06:53.030 20:20:45 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3
00:06:53.030 [2024-07-15 20:20:45.159403] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:53.030 [2024-07-15 20:20:45.159481] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124388 ]
00:06:53.030 EAL: No free 2048 kB hugepages reported on node 1
00:06:53.030 [2024-07-15 20:20:45.230449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:53.030 [2024-07-15 20:20:45.301731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:53.030-00:06:54.414 (accel.sh@19-23 config loop: val=0x1, accel_opc=xor, val=3, val='4096 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=Yes; trailing empty val= reads follow the run)
00:06:54.414 20:20:46 accel.accel_xor -- accel/accel.sh@27 checks: [[ -n software ]], [[ -n xor ]], [[ software == software ]]
00:06:54.414
real 0m1.300s
user 0m1.205s
sys 0m0.105s
************************************
END TEST accel_xor
************************************
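The two xor runs differ only in source count: the first config dump reads back val=2, while the @110 run adds -x 3 and the dump reads val=3. So -x evidently selects the number of XOR source buffers (an inference from the dumps; accel_perf's usage output would confirm it):

  # default: two source buffers XORed into the destination
  "$accel_perf" -c /dev/fd/62 -t 1 -w xor -y
  # explicit: three source buffers
  "$accel_perf" -c /dev/fd/62 -t 1 -w xor -y -x 3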
00:06:54.414 20:20:46 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
************************************
START TEST accel_dif_verify
************************************
00:06:54.414 20:20:46 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify
00:06:54.414 [2024-07-15 20:20:46.533215] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:54.414 [2024-07-15 20:20:46.533287] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124590 ]
00:06:54.414 EAL: No free 2048 kB hugepages reported on node 1
00:06:54.414 [2024-07-15 20:20:46.604403] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:54.414 [2024-07-15 20:20:46.675112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:54.414-00:06:55.799 (accel.sh@19-23 config loop: val=0x1, accel_opc=dif_verify, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No; trailing empty val= reads follow the run)
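Unlike the copy-style workloads, the dif_verify dump reads back four sizes ('4096 bytes' twice, '512 bytes', '8 bytes'), and the Yes/No field, which tracks the -y flag in the other runs, reads No here since -y is not passed. A plausible reading of the sizes (labels assumed; the log prints only the values) is a 4096-byte transfer split into 512-byte blocks, each carrying an 8-byte DIF tag:

  xfer=4096 block=512 dif_tag=8
  echo "$((xfer / block)) blocks per buffer, $(((xfer / block) * dif_tag)) bytes of DIF metadata"
  # prints: 8 blocks per buffer, 64 bytes of DIF metadata (under the labels assumed above)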
00:06:55.799 20:20:47 accel.accel_dif_verify -- accel/accel.sh@27 checks: [[ -n software ]], [[ -n dif_verify ]], [[ software == software ]]
00:06:55.799
real 0m1.299s
user 0m1.197s
sys 0m0.115s
************************************
END TEST accel_dif_verify
************************************
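Collecting the run_test lines scattered through this section, accel.sh@107-113 sweep the software module across seven workloads; the last two (dif_generate, dif_generate_copy) follow below:

  run_test accel_dualcast          accel_test -t 1 -w dualcast -y         # accel.sh@107
  run_test accel_compare           accel_test -t 1 -w compare -y          # accel.sh@108
  run_test accel_xor               accel_test -t 1 -w xor -y              # accel.sh@109
  run_test accel_xor               accel_test -t 1 -w xor -y -x 3         # accel.sh@110
  run_test accel_dif_verify        accel_test -t 1 -w dif_verify          # accel.sh@111
  run_test accel_dif_generate      accel_test -t 1 -w dif_generate        # accel.sh@112
  run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy   # accel.sh@113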
00:06:55.799 20:20:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
************************************
START TEST accel_dif_generate
************************************
00:06:55.800 20:20:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate
00:06:55.800 [2024-07-15 20:20:47.906753] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:06:55.800 [2024-07-15 20:20:47.906831] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1124930 ]
00:06:55.800 EAL: No free 2048 kB hugepages reported on node 1
00:06:55.800 [2024-07-15 20:20:47.976732] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:55.800 [2024-07-15 20:20:48.047021] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:55.800-00:06:57.188 (accel.sh@19-23 config loop: val=0x1, accel_opc=dif_generate, val='4096 bytes', val='4096 bytes', val='512 bytes', val='8 bytes', accel_module=software, val=32, val=32, val=1, val='1 seconds', val=No; trailing empty val= reads follow the run)
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:57.188 00:06:57.188 real 0m1.298s 00:06:57.188 user 0m1.203s 00:06:57.188 sys 0m0.107s 00:06:57.188 20:20:49 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:57.188 20:20:49 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:57.188 ************************************ 00:06:57.188 END TEST accel_dif_generate 00:06:57.188 ************************************ 00:06:57.188 20:20:49 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:57.188 20:20:49 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:57.188 20:20:49 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:57.188 20:20:49 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:57.188 20:20:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:57.188 ************************************ 00:06:57.188 START TEST accel_dif_generate_copy 00:06:57.189 ************************************ 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:57.189 [2024-07-15 20:20:49.280191] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
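Note: the dif_generate case that just finished and the dif_generate_copy case starting here both drive the same accel_perf example binary, differing only in the -w workload; the 4096-byte buffers with 512-byte block size and 8-byte metadata come from the trace values above. A minimal hand-run sketch, assuming the same built workspace tree (the harness additionally passes a JSON accel config over /dev/fd/62):
# Sketch only: run the DIF-generate-and-copy workload for 1 second on the
# default software accel module.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w dif_generate_copy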
00:06:57.189 [2024-07-15 20:20:49.280260] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125278 ] 00:06:57.189 EAL: No free 2048 kB hugepages reported on node 1 00:06:57.189 [2024-07-15 20:20:49.347672] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.189 [2024-07-15 20:20:49.413089] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var 
val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:57.189 20:20:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.574 00:06:58.574 real 0m1.291s 00:06:58.574 user 0m1.189s 00:06:58.574 sys 0m0.113s 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.574 20:20:50 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:58.574 ************************************ 00:06:58.574 END TEST accel_dif_generate_copy 00:06:58.574 ************************************ 00:06:58.574 20:20:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.574 20:20:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:58.574 20:20:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.574 20:20:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:58.574 20:20:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.574 20:20:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.574 ************************************ 00:06:58.574 START TEST accel_comp 00:06:58.574 ************************************ 00:06:58.574 20:20:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.574 20:20:50 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:58.574 [2024-07-15 20:20:50.645032] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:06:58.574 [2024-07-15 20:20:50.645097] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125631 ] 00:06:58.574 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.574 [2024-07-15 20:20:50.714475] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.574 [2024-07-15 20:20:50.784044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.574 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:58.575 20:20:50 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:59.961 20:20:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.961 00:06:59.961 real 0m1.299s 00:06:59.961 user 0m1.203s 00:06:59.961 sys 0m0.109s 00:06:59.961 20:20:51 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.961 20:20:51 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:59.961 ************************************ 00:06:59.961 END TEST accel_comp 00:06:59.961 ************************************ 00:06:59.961 20:20:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.961 20:20:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.961 20:20:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:59.961 20:20:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.961 20:20:51 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:59.961 ************************************ 00:06:59.961 START TEST accel_decomp 00:06:59.961 ************************************ 00:06:59.961 20:20:51 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.961 20:20:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:59.961 20:20:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:59.961 20:20:51 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.961 20:20:51 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:59.962 20:20:51 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:59.962 [2024-07-15 20:20:52.019213] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
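Note: the accel_comp case above compresses the test/accel/bib input passed via -l, and this accel_decomp case reverses it, with -y asking accel_perf to verify the decompressed output. A minimal sketch of the equivalent manual command, using the same workspace paths as the trace:
# Sketch only: 1-second software decompress of the bib test file with
# output verification (-y) enabled.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
    -t 1 -w decompress \
    -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y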
00:06:59.962 [2024-07-15 20:20:52.019417] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1125834 ] 00:06:59.962 EAL: No free 2048 kB hugepages reported on node 1 00:06:59.962 [2024-07-15 20:20:52.089611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:59.962 [2024-07-15 20:20:52.160809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:59.962 20:20:52 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:01.343 20:20:53 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:01.343 00:07:01.343 real 0m1.303s 00:07:01.343 user 0m1.206s 00:07:01.344 sys 0m0.110s 00:07:01.344 20:20:53 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:01.344 20:20:53 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:07:01.344 ************************************ 00:07:01.344 END TEST accel_decomp 00:07:01.344 ************************************ 00:07:01.344 20:20:53 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:01.344 20:20:53 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.344 20:20:53 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:01.344 20:20:53 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:01.344 20:20:53 accel -- common/autotest_common.sh@10 -- # set +x 00:07:01.344 ************************************ 00:07:01.344 START TEST accel_decomp_full 00:07:01.344 ************************************ 00:07:01.344 20:20:53 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:07:01.344 20:20:53 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:07:01.344 [2024-07-15 20:20:53.396444] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:01.344 [2024-07-15 20:20:53.396540] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126038 ] 00:07:01.344 EAL: No free 2048 kB hugepages reported on node 1 00:07:01.344 [2024-07-15 20:20:53.465792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.344 [2024-07-15 20:20:53.535637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:01.344 20:20:53 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.724 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.724 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:02.725 20:20:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:02.725 00:07:02.725 real 0m1.312s 00:07:02.725 user 0m1.215s 00:07:02.725 sys 0m0.108s 00:07:02.725 20:20:54 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:02.725 20:20:54 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:07:02.725 ************************************ 00:07:02.725 END TEST accel_decomp_full 00:07:02.725 ************************************ 00:07:02.725 20:20:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:02.725 20:20:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 20:20:54 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:07:02.725 20:20:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:02.725 20:20:54 accel -- common/autotest_common.sh@10 -- # set +x 00:07:02.725 ************************************ 00:07:02.725 START TEST accel_decomp_mcore 00:07:02.725 ************************************ 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:02.725 [2024-07-15 20:20:54.781637] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
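Note: every START TEST/END TEST banner and real/user/sys block in this log comes from the run_test helper in autotest_common.sh, which times the command it wraps. A simplified, hypothetical sketch of that pattern (not the exact helper source):
run_test() {
    # Sketch only: print the banners seen in this log and time the
    # wrapped command, e.g. accel_test -t 1 -w decompress ... -m 0xf.
    local test_name=$1; shift
    echo "************ START TEST $test_name ************"
    time "$@"
    echo "************ END TEST $test_name ************"
}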
00:07:02.725 [2024-07-15 20:20:54.781697] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126372 ] 00:07:02.725 EAL: No free 2048 kB hugepages reported on node 1 00:07:02.725 [2024-07-15 20:20:54.852483] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.725 [2024-07-15 20:20:54.922182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.725 [2024-07-15 20:20:54.922313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.725 [2024-07-15 20:20:54.922627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:02.725 [2024-07-15 20:20:54.922628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.725 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:02.726 20:20:54 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:04.111 00:07:04.111 real 0m1.308s 00:07:04.111 user 0m4.438s 00:07:04.111 sys 0m0.116s 00:07:04.111 20:20:56 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:04.111 20:20:56 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:04.111 ************************************ 00:07:04.111 END TEST accel_decomp_mcore 00:07:04.111 ************************************ 00:07:04.111 20:20:56 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:04.111 20:20:56 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.111 20:20:56 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:04.111 20:20:56 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:04.111 20:20:56 accel -- common/autotest_common.sh@10 -- # set +x 00:07:04.111 ************************************ 00:07:04.111 START TEST accel_decomp_full_mcore 00:07:04.111 ************************************ 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:07:04.111 [2024-07-15 20:20:56.159428] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
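For reference, the wrapper above reduces to a single accel_perf invocation. A minimal sketch of reproducing it by hand, assuming the workspace layout shown in this log; the -c /dev/fd/62 JSON config can be dropped here because build_accel_config emits nothing while accel_json_cfg=() stays empty:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# software decompress of the bib test vector for 1 second on 4 cores (mask 0xf);
# -y verifies the output, and -o 0 appears to select the full input size
# (the traces below show val='111250 bytes' instead of the 4 KiB default)
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -o 0 -m 0xf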
00:07:04.111 [2024-07-15 20:20:56.159483] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1126722 ] 00:07:04.111 EAL: No free 2048 kB hugepages reported on node 1 00:07:04.111 [2024-07-15 20:20:56.226115] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:04.111 [2024-07-15 20:20:56.294578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.111 [2024-07-15 20:20:56.294682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:04.111 [2024-07-15 20:20:56.294836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.111 [2024-07-15 20:20:56.294836] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.111 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:04.112 20:20:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:05.079 00:07:05.079 real 0m1.310s 00:07:05.079 user 0m4.465s 00:07:05.079 sys 0m0.118s 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:05.079 20:20:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:05.079 ************************************ 00:07:05.079 END TEST accel_decomp_full_mcore 00:07:05.079 ************************************ 00:07:05.341 20:20:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:05.341 20:20:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.341 20:20:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:05.341 20:20:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:05.341 20:20:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:05.341 ************************************ 00:07:05.341 START TEST accel_decomp_mthread 00:07:05.341 ************************************ 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:05.341 [2024-07-15 20:20:57.546041] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
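The mthread variant launched above drops the core mask and instead passes -T 2, presumably two worker threads on the single core selected by -c 0x1 in the EAL arguments below; a sketch under the same assumptions as the previous one:

# single-core decompress with 2 threads; the default transfer size applies here
# (the trace shows val='4096 bytes' rather than the full input)
./build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2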
00:07:05.341 [2024-07-15 20:20:57.546134] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127074 ] 00:07:05.341 EAL: No free 2048 kB hugepages reported on node 1 00:07:05.341 [2024-07-15 20:20:57.614689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.341 [2024-07-15 20:20:57.680542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.341 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.342 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.603 20:20:57 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:05.603 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:05.604 20:20:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:06.547 00:07:06.547 real 0m1.299s 00:07:06.547 user 0m1.214s 00:07:06.547 sys 0m0.098s 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:06.547 20:20:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:06.547 ************************************ 00:07:06.547 END TEST accel_decomp_mthread 00:07:06.547 ************************************ 00:07:06.547 20:20:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:06.547 20:20:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.547 20:20:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:06.547 20:20:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:06.547 20:20:58 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:06.547 ************************************ 00:07:06.547 START TEST accel_decomp_full_mthread 00:07:06.547 ************************************ 00:07:06.547 20:20:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.547 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:06.547 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:06.547 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:06.548 20:20:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:06.548 [2024-07-15 20:20:58.918898] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
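Every suite in this log is driven through run_test, which, judging by the START/END banners and the real/user/sys lines it emits, wraps the command in a banner pair plus a time measurement. A rough illustrative sketch (run_test_sketch is a hypothetical stand-in; the real helper lives in the SPDK common test scripts referenced throughout as common/autotest_common.sh):

run_test_sketch() {
  local name=$1; shift
  echo "START TEST $name"
  time "$@"    # yields the real/user/sys summary printed after each suite
  echo "END TEST $name"
}
run_test_sketch accel_decomp_full_mthread accel_test -t 1 -w decompress -l test/accel/bib -y -o 0 -T 2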
00:07:06.548 [2024-07-15 20:20:58.918971] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127359 ] 00:07:06.809 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.809 [2024-07-15 20:20:58.988618] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.809 [2024-07-15 20:20:59.058153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.809 20:20:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.809 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:06.810 20:20:59 
accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:06.810 20:20:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.198 00:07:08.198 real 0m1.332s 00:07:08.198 user 0m1.223s 00:07:08.198 sys 0m0.121s 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.198 20:21:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 ************************************ 00:07:08.198 END 
TEST accel_decomp_full_mthread 00:07:08.198 ************************************ 00:07:08.198 20:21:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.198 20:21:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:08.198 20:21:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.198 20:21:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:08.198 20:21:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:08.198 20:21:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.198 20:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.198 20:21:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.198 20:21:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.198 20:21:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.198 20:21:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.198 20:21:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.198 20:21:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:08.198 20:21:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:08.198 ************************************ 00:07:08.198 START TEST accel_dif_functional_tests 00:07:08.198 ************************************ 00:07:08.198 20:21:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:08.198 [2024-07-15 20:21:00.348337] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:08.198 [2024-07-15 20:21:00.348390] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127578 ] 00:07:08.198 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.198 [2024-07-15 20:21:00.417925] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:08.198 [2024-07-15 20:21:00.493790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:08.198 [2024-07-15 20:21:00.493907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:08.198 [2024-07-15 20:21:00.493911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.198 00:07:08.198 00:07:08.198 CUnit - A unit testing framework for C - Version 2.1-3 00:07:08.198 http://cunit.sourceforge.net/ 00:07:08.198 00:07:08.198 00:07:08.198 Suite: accel_dif 00:07:08.198 Test: verify: DIF generated, GUARD check ...passed 00:07:08.198 Test: verify: DIF generated, APPTAG check ...passed 00:07:08.198 Test: verify: DIF generated, REFTAG check ...passed 00:07:08.198 Test: verify: DIF not generated, GUARD check ...[2024-07-15 20:21:00.549893] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.198 passed 00:07:08.198 Test: verify: DIF not generated, APPTAG check ...[2024-07-15 20:21:00.549938] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.198 passed 00:07:08.198 Test: verify: DIF not generated, REFTAG check ...[2024-07-15 20:21:00.549960] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.198 passed 00:07:08.198 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:08.198 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-15 
20:21:00.550007] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:08.198 passed 00:07:08.198 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:08.198 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:08.198 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:08.198 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-15 20:21:00.550119] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:08.198 passed 00:07:08.198 Test: verify copy: DIF generated, GUARD check ...passed 00:07:08.198 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:08.198 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:08.198 Test: verify copy: DIF not generated, GUARD check ...[2024-07-15 20:21:00.550245] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:08.198 passed 00:07:08.198 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-15 20:21:00.550268] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:08.198 passed 00:07:08.198 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-15 20:21:00.550290] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:08.198 passed 00:07:08.198 Test: generate copy: DIF generated, GUARD check ...passed 00:07:08.198 Test: generate copy: DIF generated, APTTAG check ...passed 00:07:08.198 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:08.198 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:08.198 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:08.198 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:08.198 Test: generate copy: iovecs-len validate ...[2024-07-15 20:21:00.550476] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:08.198 passed 00:07:08.198 Test: generate copy: buffer alignment validate ...passed 00:07:08.198 00:07:08.198 Run Summary: Type Total Ran Passed Failed Inactive 00:07:08.198 suites 1 1 n/a 0 0 00:07:08.198 tests 26 26 26 0 0 00:07:08.198 asserts 115 115 115 0 n/a 00:07:08.198 00:07:08.198 Elapsed time = 0.002 seconds 00:07:08.459 00:07:08.459 real 0m0.368s 00:07:08.459 user 0m0.500s 00:07:08.459 sys 0m0.131s 00:07:08.459 20:21:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.459 20:21:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:08.459 ************************************ 00:07:08.459 END TEST accel_dif_functional_tests 00:07:08.459 ************************************ 00:07:08.459 20:21:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.459 00:07:08.459 real 0m30.242s 00:07:08.459 user 0m33.735s 00:07:08.459 sys 0m4.238s 00:07:08.459 20:21:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.459 20:21:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.459 ************************************ 00:07:08.459 END TEST accel 00:07:08.459 ************************************ 00:07:08.459 20:21:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:08.459 20:21:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:08.459 20:21:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:08.459 20:21:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.459 20:21:00 -- common/autotest_common.sh@10 -- # set +x 00:07:08.459 ************************************ 00:07:08.459 START TEST accel_rpc 00:07:08.459 ************************************ 00:07:08.459 20:21:00 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:08.718 * Looking for test storage... 00:07:08.718 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:08.718 20:21:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:08.718 20:21:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=1127894 00:07:08.718 20:21:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 1127894 00:07:08.718 20:21:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 1127894 ']' 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.718 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.718 20:21:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:08.718 [2024-07-15 20:21:00.943623] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
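The accel_rpc suite that follows starts spdk_tgt with --wait-for-rpc and then drives it over the RPC socket. A sketch of the opcode-assignment sequence traced below, assuming rpc.py's default /var/tmp/spdk.sock:

# pre-init, assignments are accepted even for a nonexistent module
scripts/rpc.py accel_assign_opc -o copy -m incorrect
scripts/rpc.py accel_assign_opc -o copy -m software
scripts/rpc.py framework_start_init
scripts/rpc.py accel_get_opc_assignments | jq -r .copy   # expected to print: software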
00:07:08.718 [2024-07-15 20:21:00.943696] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1127894 ] 00:07:08.718 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.718 [2024-07-15 20:21:01.018003] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.718 [2024-07-15 20:21:01.091413] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.656 20:21:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.656 20:21:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:09.656 20:21:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:09.656 20:21:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:09.656 20:21:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:09.656 20:21:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:09.656 20:21:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:09.656 20:21:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:09.656 20:21:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.656 20:21:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 ************************************ 00:07:09.656 START TEST accel_assign_opcode 00:07:09.656 ************************************ 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 [2024-07-15 20:21:01.761382] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 [2024-07-15 20:21:01.773402] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 
00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.656 software 00:07:09.656 00:07:09.656 real 0m0.217s 00:07:09.656 user 0m0.054s 00:07:09.656 sys 0m0.008s 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.656 20:21:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:09.656 ************************************ 00:07:09.656 END TEST accel_assign_opcode 00:07:09.656 ************************************ 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:09.656 20:21:02 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 1127894 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 1127894 ']' 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 1127894 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:09.656 20:21:02 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1127894 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1127894' 00:07:09.916 killing process with pid 1127894 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@967 -- # kill 1127894 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@972 -- # wait 1127894 00:07:09.916 00:07:09.916 real 0m1.498s 00:07:09.916 user 0m1.586s 00:07:09.916 sys 0m0.416s 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.916 20:21:02 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:09.916 ************************************ 00:07:09.916 END TEST accel_rpc 00:07:09.916 ************************************ 00:07:10.177 20:21:02 -- common/autotest_common.sh@1142 -- # return 0 00:07:10.177 20:21:02 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.177 20:21:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:10.177 20:21:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:10.177 20:21:02 -- common/autotest_common.sh@10 -- # set +x 00:07:10.177 ************************************ 00:07:10.177 START TEST app_cmdline 00:07:10.177 ************************************ 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:10.177 * Looking for test storage... 
00:07:10.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:10.177 20:21:02 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:10.177 20:21:02 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1128364 00:07:10.177 20:21:02 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1128364 00:07:10.177 20:21:02 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 1128364 ']' 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:10.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:10.177 20:21:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:10.177 [2024-07-15 20:21:02.517912] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:10.177 [2024-07-15 20:21:02.517986] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1128364 ] 00:07:10.177 EAL: No free 2048 kB hugepages reported on node 1 00:07:10.437 [2024-07-15 20:21:02.588931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.437 [2024-07-15 20:21:02.662997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.008 20:21:03 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:11.008 20:21:03 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:11.008 20:21:03 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:11.268 { 00:07:11.268 "version": "SPDK v24.09-pre git sha1 6c0846996", 00:07:11.268 "fields": { 00:07:11.268 "major": 24, 00:07:11.268 "minor": 9, 00:07:11.268 "patch": 0, 00:07:11.268 "suffix": "-pre", 00:07:11.268 "commit": "6c0846996" 00:07:11.268 } 00:07:11.268 } 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:11.268 request: 00:07:11.268 { 00:07:11.268 "method": "env_dpdk_get_mem_stats", 00:07:11.268 "req_id": 1 00:07:11.268 } 00:07:11.268 Got JSON-RPC error response 00:07:11.268 response: 00:07:11.268 { 00:07:11.268 "code": -32601, 00:07:11.268 "message": "Method not found" 00:07:11.268 } 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:11.268 20:21:03 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1128364 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 1128364 ']' 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 1128364 00:07:11.268 20:21:03 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1128364 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1128364' 00:07:11.528 killing process with pid 1128364 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 1128364 00:07:11.528 20:21:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 1128364 00:07:11.788 00:07:11.788 real 0m1.551s 00:07:11.788 user 0m1.847s 00:07:11.788 sys 0m0.399s 00:07:11.788 20:21:03 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 
00:07:11.788 20:21:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:11.788 ************************************ 00:07:11.788 END TEST app_cmdline 00:07:11.788 ************************************ 00:07:11.788 20:21:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.788 20:21:03 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:11.788 20:21:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.788 20:21:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.788 20:21:03 -- common/autotest_common.sh@10 -- # set +x 00:07:11.788 ************************************ 00:07:11.788 START TEST version 00:07:11.788 ************************************ 00:07:11.788 20:21:03 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:11.788 * Looking for test storage... 00:07:11.788 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:11.788 20:21:04 version -- app/version.sh@17 -- # get_header_version major 00:07:11.788 20:21:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # cut -f2 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.788 20:21:04 version -- app/version.sh@17 -- # major=24 00:07:11.788 20:21:04 version -- app/version.sh@18 -- # get_header_version minor 00:07:11.788 20:21:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # cut -f2 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.788 20:21:04 version -- app/version.sh@18 -- # minor=9 00:07:11.788 20:21:04 version -- app/version.sh@19 -- # get_header_version patch 00:07:11.788 20:21:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # cut -f2 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.788 20:21:04 version -- app/version.sh@19 -- # patch=0 00:07:11.788 20:21:04 version -- app/version.sh@20 -- # get_header_version suffix 00:07:11.788 20:21:04 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # cut -f2 00:07:11.788 20:21:04 version -- app/version.sh@14 -- # tr -d '"' 00:07:11.788 20:21:04 version -- app/version.sh@20 -- # suffix=-pre 00:07:11.788 20:21:04 version -- app/version.sh@22 -- # version=24.9 00:07:11.788 20:21:04 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:11.788 20:21:04 version -- app/version.sh@28 -- # version=24.9rc0 00:07:11.788 20:21:04 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:11.788 20:21:04 version -- app/version.sh@30 -- # python3 -c 'import spdk; 
print(spdk.__version__)' 00:07:11.788 20:21:04 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:11.788 20:21:04 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:11.788 00:07:11.788 real 0m0.178s 00:07:11.788 user 0m0.080s 00:07:11.788 sys 0m0.141s 00:07:11.788 20:21:04 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.788 20:21:04 version -- common/autotest_common.sh@10 -- # set +x 00:07:11.788 ************************************ 00:07:11.788 END TEST version 00:07:11.788 ************************************ 00:07:12.049 20:21:04 -- common/autotest_common.sh@1142 -- # return 0 00:07:12.049 20:21:04 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@198 -- # uname -s 00:07:12.049 20:21:04 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:12.049 20:21:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:12.049 20:21:04 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:12.049 20:21:04 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:12.049 20:21:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:12.049 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:07:12.049 20:21:04 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:12.049 20:21:04 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:12.049 20:21:04 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.049 20:21:04 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:12.049 20:21:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.049 20:21:04 -- common/autotest_common.sh@10 -- # set +x 00:07:12.049 ************************************ 00:07:12.049 START TEST nvmf_tcp 00:07:12.049 ************************************ 00:07:12.049 20:21:04 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:12.049 * Looking for test storage... 00:07:12.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.049 20:21:04 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.049 20:21:04 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.049 20:21:04 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.049 20:21:04 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.049 20:21:04 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.049 20:21:04 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.050 20:21:04 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.050 20:21:04 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:12.050 20:21:04 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:12.050 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:12.050 20:21:04 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.050 20:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.311 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:12.311 20:21:04 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:12.311 20:21:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:12.311 20:21:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.311 20:21:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:12.311 ************************************ 00:07:12.311 START TEST nvmf_example 00:07:12.311 ************************************ 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:12.311 * Looking for test storage... 
00:07:12.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:12.311 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:12.312 20:21:04 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:20.528 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:07:20.528 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:20.528 Found net devices under 
0000:31:00.0: cvl_0_0 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:20.528 Found net devices under 0000:31:00.1: cvl_0_1 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:20.528 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:20.529 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:20.529 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.496 ms 00:07:20.529 00:07:20.529 --- 10.0.0.2 ping statistics --- 00:07:20.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.529 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:20.529 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:20.529 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.542 ms 00:07:20.529 00:07:20.529 --- 10.0.0.1 ping statistics --- 00:07:20.529 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:20.529 rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:20.529 20:21:12 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1133596 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1133596 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 1133596 ']' 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:20.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:20.789 20:21:12 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:20.789 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:21.738 20:21:13 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:21.738 EAL: No free 2048 kB hugepages reported on node 1 
00:07:31.750 Initializing NVMe Controllers 00:07:31.750 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:31.750 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:31.750 Initialization complete. Launching workers. 00:07:31.750 ======================================================== 00:07:31.750 Latency(us) 00:07:31.750 Device Information : IOPS MiB/s Average min max 00:07:31.750 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18613.80 72.71 3440.22 757.54 16409.97 00:07:31.750 ======================================================== 00:07:31.751 Total : 18613.80 72.71 3440.22 757.54 16409.97 00:07:31.751 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:31.751 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:31.751 rmmod nvme_tcp 00:07:32.010 rmmod nvme_fabrics 00:07:32.010 rmmod nvme_keyring 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 1133596 ']' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 1133596 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 1133596 ']' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 1133596 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1133596 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1133596' 00:07:32.010 killing process with pid 1133596 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 1133596 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 1133596 00:07:32.010 nvmf threads initialize successfully 00:07:32.010 bdev subsystem init successfully 00:07:32.010 created a nvmf target service 00:07:32.010 create targets's poll groups done 00:07:32.010 all subsystems of target started 00:07:32.010 nvmf target is running 00:07:32.010 all subsystems of target stopped 00:07:32.010 destroy targets's poll groups done 00:07:32.010 destroyed the nvmf target service 00:07:32.010 bdev subsystem finish successfully 00:07:32.010 nvmf threads destroy successfully 00:07:32.010 20:21:24 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:32.010 20:21:24 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 00:07:34.557 real 0m22.031s 00:07:34.557 user 0m46.784s 00:07:34.557 sys 0m7.255s 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:34.557 20:21:26 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 ************************************ 00:07:34.557 END TEST nvmf_example 00:07:34.557 ************************************ 00:07:34.557 20:21:26 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:34.557 20:21:26 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.557 20:21:26 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:34.557 20:21:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:34.557 20:21:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:34.557 ************************************ 00:07:34.557 START TEST nvmf_filesystem 00:07:34.557 ************************************ 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:34.557 * Looking for test storage... 
00:07:34.557 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:34.557 20:21:26 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 
00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:34.557 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- 
common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:34.558 #define SPDK_CONFIG_H 00:07:34.558 #define SPDK_CONFIG_APPS 1 00:07:34.558 #define SPDK_CONFIG_ARCH native 00:07:34.558 #undef SPDK_CONFIG_ASAN 00:07:34.558 #undef SPDK_CONFIG_AVAHI 00:07:34.558 #undef SPDK_CONFIG_CET 00:07:34.558 #define SPDK_CONFIG_COVERAGE 1 00:07:34.558 #define SPDK_CONFIG_CROSS_PREFIX 00:07:34.558 #undef SPDK_CONFIG_CRYPTO 00:07:34.558 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:34.558 #undef SPDK_CONFIG_CUSTOMOCF 00:07:34.558 #undef SPDK_CONFIG_DAOS 00:07:34.558 #define SPDK_CONFIG_DAOS_DIR 00:07:34.558 #define SPDK_CONFIG_DEBUG 1 00:07:34.558 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:34.558 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:07:34.558 #define SPDK_CONFIG_DPDK_INC_DIR 00:07:34.558 #define SPDK_CONFIG_DPDK_LIB_DIR 00:07:34.558 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:34.558 #undef SPDK_CONFIG_DPDK_UADK 00:07:34.558 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:34.558 #define SPDK_CONFIG_EXAMPLES 1 00:07:34.558 #undef SPDK_CONFIG_FC 00:07:34.558 #define SPDK_CONFIG_FC_PATH 00:07:34.558 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:34.558 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:34.558 #undef SPDK_CONFIG_FUSE 00:07:34.558 #undef SPDK_CONFIG_FUZZER 00:07:34.558 #define SPDK_CONFIG_FUZZER_LIB 00:07:34.558 #undef SPDK_CONFIG_GOLANG 00:07:34.558 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:34.558 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:34.558 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:34.558 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:34.558 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:34.558 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:34.558 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:34.558 #define SPDK_CONFIG_IDXD 1 00:07:34.558 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:34.558 #undef SPDK_CONFIG_IPSEC_MB 00:07:34.558 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:34.558 #define SPDK_CONFIG_ISAL 1 00:07:34.558 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:34.558 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:34.558 #define SPDK_CONFIG_LIBDIR 00:07:34.558 #undef SPDK_CONFIG_LTO 00:07:34.558 #define SPDK_CONFIG_MAX_LCORES 128 00:07:34.558 #define SPDK_CONFIG_NVME_CUSE 1 00:07:34.558 #undef SPDK_CONFIG_OCF 00:07:34.558 #define SPDK_CONFIG_OCF_PATH 00:07:34.558 #define 
SPDK_CONFIG_OPENSSL_PATH 00:07:34.558 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:34.558 #define SPDK_CONFIG_PGO_DIR 00:07:34.558 #undef SPDK_CONFIG_PGO_USE 00:07:34.558 #define SPDK_CONFIG_PREFIX /usr/local 00:07:34.558 #undef SPDK_CONFIG_RAID5F 00:07:34.558 #undef SPDK_CONFIG_RBD 00:07:34.558 #define SPDK_CONFIG_RDMA 1 00:07:34.558 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:34.558 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:34.558 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:34.558 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:34.558 #define SPDK_CONFIG_SHARED 1 00:07:34.558 #undef SPDK_CONFIG_SMA 00:07:34.558 #define SPDK_CONFIG_TESTS 1 00:07:34.558 #undef SPDK_CONFIG_TSAN 00:07:34.558 #define SPDK_CONFIG_UBLK 1 00:07:34.558 #define SPDK_CONFIG_UBSAN 1 00:07:34.558 #undef SPDK_CONFIG_UNIT_TESTS 00:07:34.558 #undef SPDK_CONFIG_URING 00:07:34.558 #define SPDK_CONFIG_URING_PATH 00:07:34.558 #undef SPDK_CONFIG_URING_ZNS 00:07:34.558 #undef SPDK_CONFIG_USDT 00:07:34.558 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:34.558 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:34.558 #define SPDK_CONFIG_VFIO_USER 1 00:07:34.558 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:34.558 #define SPDK_CONFIG_VHOST 1 00:07:34.558 #define SPDK_CONFIG_VIRTIO 1 00:07:34.558 #undef SPDK_CONFIG_VTUNE 00:07:34.558 #define SPDK_CONFIG_VTUNE_DIR 00:07:34.558 #define SPDK_CONFIG_WERROR 1 00:07:34.558 #define SPDK_CONFIG_WPDK_DIR 00:07:34.558 #undef SPDK_CONFIG_XNVME 00:07:34.558 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # 
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:34.558 20:21:26 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export 
SPDK_TEST_NVME_CLI 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- 
common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:34.559 20:21:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.559 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo 
leak:libfuse3.so 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 
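The trace below now enters set_test_storage() from common/autotest_common.sh: the test asks for 2 GiB of scratch space (2147483648 bytes, padded to 2214592512 before the scan), indexes every mount point from df -T, and walks a list of candidate directories until it finds one with enough room, exporting the winner as SPDK_TEST_STORAGE. As a reading aid, here is a condensed Bash sketch of that logic, reconstructed only from the commands visible in the trace; the names come from the trace itself, but this is a simplified approximation, not the verbatim SPDK source:

    set_test_storage() {
        local requested_size=$1   # bytes; this run passes 2147483648 (2 GiB)
        local source fs size use avail _ mount target_dir target_space new_size
        local -A mounts fss sizes avails uses
        local storage_fallback storage_candidates

        # Fallback scratch area under /tmp; $testdir is the caller's test directory
        # (here /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target).
        # mktemp -u only generates the name without creating it.
        storage_fallback=$(mktemp -udt spdk.XXXXXX)
        storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback")
        mkdir -p "${storage_candidates[@]}"

        # Index every mounted filesystem; df -T reports sizes in 1K blocks.
        while read -r source fs size use avail _ mount; do
            mounts["$mount"]=$source
            fss["$mount"]=$fs
            sizes["$mount"]=$((size * 1024))
            uses["$mount"]=$((use * 1024))
            avails["$mount"]=$((avail * 1024))
        done < <(df -T | grep -v Filesystem)

        printf '* Looking for test storage...\n'
        for target_dir in "${storage_candidates[@]}"; do
            # Resolve the mount point backing this candidate directory.
            mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
            target_space=${avails["$mount"]}
            if ((target_space == 0 || target_space < requested_size)); then
                continue
            fi
            # On tmpfs/ramfs or the root fs, refuse to push the mount past ~95% full.
            if [[ ${fss["$mount"]} == tmpfs || ${fss["$mount"]} == ramfs || $mount == / ]]; then
                new_size=$((uses["$mount"] + requested_size))
                if ((new_size * 100 / sizes["$mount"] > 95)); then
                    continue
                fi
            fi
            export SPDK_TEST_STORAGE=$target_dir
            printf '* Found test storage at %s\n' "$target_dir"
            return 0
        done
        return 1
    }

In the run traced below, the first candidate already sits on the root overlay mount with target_space=122775367680 bytes available against the padded request of 2214592512, and the 95% guard passes (new_size=8810205184 versus sizes[/]=129370980352), so SPDK_TEST_STORAGE ends up at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target.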
00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j144 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 1136393 ]] 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 1136393 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.ogCgV8 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.ogCgV8/tests/target /tmp/spdk.ogCgV8 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- 
# avails["$mount"]=67108864 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=956157952 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4328271872 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=122775367680 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=129370980352 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=6595612672 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64680779776 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4710400 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=25864253440 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=25874198528 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=9945088 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=efivarfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=efivarfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=179200 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=507904 00:07:34.560 20:21:26 
nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=324608 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.560 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=64683683840 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=64685490176 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=1806336 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12937093120 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12937097216 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:34.561 * Looking for test storage... 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=122775367680 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=8810205184 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # 
SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.561 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:34.561 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:34.562 20:21:26 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:07:42.697 Found 0000:31:00.0 (0x8086 - 0x159b) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 
(0x8086 - 0x159b)' 00:07:42.697 Found 0000:31:00.1 (0x8086 - 0x159b) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:07:42.697 Found net devices under 0000:31:00.0: cvl_0_0 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:07:42.697 Found net devices under 0000:31:00.1: cvl_0_1 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:42.697 20:21:34 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:42.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:42.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:07:42.957 00:07:42.957 --- 10.0.0.2 ping statistics --- 00:07:42.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.957 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:42.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:42.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:07:42.957 00:07:42.957 --- 10.0.0.1 ping statistics --- 00:07:42.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:42.957 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:42.957 20:21:35 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.216 ************************************ 00:07:43.217 START TEST nvmf_filesystem_no_in_capsule 00:07:43.217 ************************************ 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1140704 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1140704 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1140704 ']' 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.217 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:43.217 20:21:35 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:43.217 [2024-07-15 20:21:35.422520] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:43.217 [2024-07-15 20:21:35.422566] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.217 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.217 [2024-07-15 20:21:35.496434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:43.217 [2024-07-15 20:21:35.563873] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.217 [2024-07-15 20:21:35.563912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.217 [2024-07-15 20:21:35.563920] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.217 [2024-07-15 20:21:35.563926] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.217 [2024-07-15 20:21:35.563932] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:43.217 [2024-07-15 20:21:35.564073] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:43.217 [2024-07-15 20:21:35.564184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:43.217 [2024-07-15 20:21:35.564339] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:43.217 [2024-07-15 20:21:35.564524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 [2024-07-15 20:21:36.241911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
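
At this point nvmf/common.sh has finished building the test topology: the two E810 ports found above become the initiator and target sides of an NVMe/TCP link, with the target side isolated in its own network namespace. A condensed sketch of the equivalent manual setup, with the namespace, interface, and address values taken from the trace (the nvmf_tgt path is abbreviated, and the initial addr-flush steps are omitted):

    ip netns add cvl_0_0_ns_spdk                      # target gets its own namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # first port -> target namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator side (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator
    ip netns exec cvl_0_0_ns_spdk .../spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Both pings answering in well under a millisecond is the sanity gate. Only then is nvmf_tgt launched inside the namespace (-m 0xF giving the four reactor cores seen starting above), the suite waits for the RPC socket at /var/tmp/spdk.sock, and the TCP transport is created with -c 0, i.e. no in-capsule data, for this first half of the suite.
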
00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:44.159 [2024-07-15 20:21:36.377536] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:44.159 { 00:07:44.159 "name": "Malloc1", 00:07:44.159 "aliases": [ 00:07:44.159 "7915b012-72fa-4319-91f4-bed2e833339f" 00:07:44.159 ], 00:07:44.159 "product_name": "Malloc disk", 00:07:44.159 "block_size": 512, 00:07:44.159 "num_blocks": 1048576, 00:07:44.159 "uuid": "7915b012-72fa-4319-91f4-bed2e833339f", 00:07:44.159 "assigned_rate_limits": { 00:07:44.159 "rw_ios_per_sec": 0, 00:07:44.159 "rw_mbytes_per_sec": 0, 00:07:44.159 "r_mbytes_per_sec": 0, 00:07:44.159 "w_mbytes_per_sec": 0 00:07:44.159 }, 00:07:44.159 "claimed": true, 00:07:44.159 "claim_type": "exclusive_write", 00:07:44.159 "zoned": false, 00:07:44.159 "supported_io_types": { 00:07:44.159 "read": true, 00:07:44.159 "write": true, 00:07:44.159 "unmap": true, 00:07:44.159 "flush": true, 00:07:44.159 "reset": true, 00:07:44.159 "nvme_admin": false, 00:07:44.159 "nvme_io": false, 00:07:44.159 "nvme_io_md": false, 00:07:44.159 "write_zeroes": true, 00:07:44.159 "zcopy": true, 00:07:44.159 "get_zone_info": false, 00:07:44.159 "zone_management": false, 00:07:44.159 "zone_append": false, 00:07:44.159 "compare": false, 00:07:44.159 "compare_and_write": false, 00:07:44.159 "abort": true, 00:07:44.159 "seek_hole": false, 00:07:44.159 "seek_data": false, 00:07:44.159 "copy": true, 00:07:44.159 "nvme_iov_md": false 00:07:44.159 }, 00:07:44.159 "memory_domains": [ 00:07:44.159 { 00:07:44.159 "dma_device_id": "system", 00:07:44.159 "dma_device_type": 1 00:07:44.159 }, 00:07:44.159 { 00:07:44.159 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:44.159 "dma_device_type": 2 00:07:44.159 } 00:07:44.159 ], 00:07:44.159 "driver_specific": {} 00:07:44.159 } 00:07:44.159 ]' 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:44.159 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:44.160 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:44.160 20:21:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:46.072 20:21:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:46.072 20:21:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:46.072 20:21:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local 
nvme_device_counter=1 nvme_devices=0 00:07:46.072 20:21:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:46.072 20:21:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:47.988 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:48.560 20:21:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:49.503 
************************************ 00:07:49.503 START TEST filesystem_ext4 00:07:49.503 ************************************ 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:49.503 20:21:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:49.503 mke2fs 1.46.5 (30-Dec-2021) 00:07:49.503 Discarding device blocks: 0/522240 done 00:07:49.503 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:49.503 Filesystem UUID: 3398c2b6-ed04-4ac2-a43d-1b812bcdeb16 00:07:49.503 Superblock backups stored on blocks: 00:07:49.503 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:49.503 00:07:49.503 Allocating group tables: 0/64 done 00:07:49.503 Writing inode tables: 0/64 done 00:07:52.803 Creating journal (8192 blocks): done 00:07:53.063 Writing superblocks and filesystem accounting information: 0/64 8/64 done 00:07:53.063 00:07:53.063 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:53.063 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:53.326 20:21:45 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1140704 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:53.326 00:07:53.326 real 0m4.015s 00:07:53.326 user 0m0.029s 00:07:53.326 sys 0m0.049s 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:53.326 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:53.326 ************************************ 00:07:53.326 END TEST filesystem_ext4 00:07:53.326 ************************************ 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:53.588 ************************************ 00:07:53.588 START TEST filesystem_btrfs 00:07:53.588 ************************************ 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:53.588 20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:53.588 
20:21:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:53.849 btrfs-progs v6.6.2 00:07:53.850 See https://btrfs.readthedocs.io for more information. 00:07:53.850 00:07:53.850 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:07:53.850 NOTE: several default settings have changed in version 5.15, please make sure 00:07:53.850 this does not affect your deployments: 00:07:53.850 - DUP for metadata (-m dup) 00:07:53.850 - enabled no-holes (-O no-holes) 00:07:53.850 - enabled free-space-tree (-R free-space-tree) 00:07:53.850 00:07:53.850 Label: (null) 00:07:53.850 UUID: 5a0dc13d-ff5e-4e32-8acd-664ca982b19e 00:07:53.850 Node size: 16384 00:07:53.850 Sector size: 4096 00:07:53.850 Filesystem size: 510.00MiB 00:07:53.850 Block group profiles: 00:07:53.850 Data: single 8.00MiB 00:07:53.850 Metadata: DUP 32.00MiB 00:07:53.850 System: DUP 8.00MiB 00:07:53.850 SSD detected: yes 00:07:53.850 Zoned device: no 00:07:53.850 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:53.850 Runtime features: free-space-tree 00:07:53.850 Checksum: crc32c 00:07:53.850 Number of devices: 1 00:07:53.850 Devices: 00:07:53.850 ID SIZE PATH 00:07:53.850 1 510.00MiB /dev/nvme0n1p1 00:07:53.850 00:07:53.850 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:53.850 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:54.112 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:54.112 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1140704 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:54.373 00:07:54.373 real 0m0.769s 00:07:54.373 user 0m0.023s 00:07:54.373 sys 0m0.068s 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 
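
Each of the filesystem_{ext4,btrfs,xfs} subtests in this block runs the same short body from target/filesystem.sh against the GPT partition carved out of the exported namespace. Condensed from the xtrace, with device and mountpoint names as in the trace:

    mkfs.$fstype $force /dev/nvme0n1p1       # force flag differs per mkfs, see below
    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa                    # one write must round-trip over NVMe/TCP
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                       # target process must have survived
    lsblk -l -o NAME | grep -q -w nvme0n1    # controller still attached
    lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible

The pass criterion is simply that every step exits 0 while the target stays up; the real/user/sys lines after each subtest are the shell's time output for that body.
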
00:07:54.373 ************************************ 00:07:54.373 END TEST filesystem_btrfs 00:07:54.373 ************************************ 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:54.373 ************************************ 00:07:54.373 START TEST filesystem_xfs 00:07:54.373 ************************************ 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:54.373 20:21:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:54.373 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:54.373 = sectsz=512 attr=2, projid32bit=1 00:07:54.373 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:54.373 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:54.373 data = bsize=4096 blocks=130560, imaxpct=25 00:07:54.373 = sunit=0 swidth=0 blks 00:07:54.373 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:54.373 log =internal log bsize=4096 blocks=16384, version=2 00:07:54.373 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:54.373 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:55.312 Discarding blocks...Done. 
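
The mkfs.xfs geometry just printed doubles as a size cross-check:

    130560 blocks * 4096 B = 534,773,760 B = 510 MiB    (xfs data section)
    1,048,576 blocks * 512 B = 536,870,912 B = 512 MiB  (Malloc1 bdev)

That 510 MiB is the same figure mkfs.btrfs reported for /dev/nvme0n1p1 above. The roughly 2 MiB lost from the 512 MiB bdev is consistent with GPT metadata plus parted's partition alignment from the earlier mklabel gpt / mkpart step (an inference; the log itself only shows the numbers).
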
00:07:55.312 20:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:55.312 20:21:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:07:57.856 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1140704 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.116 00:07:58.116 real 0m3.640s 00:07:58.116 user 0m0.020s 00:07:58.116 sys 0m0.063s 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.116 ************************************ 00:07:58.116 END TEST filesystem_xfs 00:07:58.116 ************************************ 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:58.116 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:07:58.375 20:21:50 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:07:58.635 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:58.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.896 20:21:51 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1140704 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1140704 ']' 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1140704 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1140704 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1140704' 00:07:58.896 killing process with pid 1140704 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 1140704 00:07:58.896 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 1140704 00:07:59.156 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:07:59.156 00:07:59.156 real 0m16.023s 00:07:59.156 user 1m3.245s 00:07:59.156 sys 0m1.111s 00:07:59.156 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.156 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.156 ************************************ 00:07:59.156 END TEST nvmf_filesystem_no_in_capsule 00:07:59.156 ************************************ 00:07:59.156 20:21:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:07:59.156 20:21:51 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 
-le 1 ']' 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:59.157 ************************************ 00:07:59.157 START TEST nvmf_filesystem_in_capsule 00:07:59.157 ************************************ 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=1143965 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 1143965 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 1143965 ']' 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:59.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.157 20:21:51 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.157 [2024-07-15 20:21:51.525732] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:07:59.157 [2024-07-15 20:21:51.525777] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:59.418 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.418 [2024-07-15 20:21:51.599846] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:59.418 [2024-07-15 20:21:51.664553] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:59.418 [2024-07-15 20:21:51.664593] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
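
Everything from here on is the same suite re-run with in_capsule=4096. The functional difference is one argument to nvmf_create_transport, visible in the next xtrace block: -c sets the transport's in-capsule data size, so with -c 4096 a host command can carry up to 4 KiB of write data inside the NVMe/TCP command capsule itself, rather than having the controller fetch it in a separate data transfer, which exercises a different receive path in the target. Side by side (rpc_cmd in the trace is a thin wrapper over SPDK's stock scripts/rpc.py):

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0      # first half
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096   # this half
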
00:07:59.418 [2024-07-15 20:21:51.664601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:59.418 [2024-07-15 20:21:51.664607] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:59.418 [2024-07-15 20:21:51.664612] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:59.418 [2024-07-15 20:21:51.664752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:59.418 [2024-07-15 20:21:51.664863] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:59.418 [2024-07-15 20:21:51.665016] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.418 [2024-07-15 20:21:51.665017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:59.990 [2024-07-15 20:21:52.340948] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:59.990 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 Malloc1 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.251 20:21:52 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 [2024-07-15 20:21:52.465578] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:00.251 { 00:08:00.251 "name": "Malloc1", 00:08:00.251 "aliases": [ 00:08:00.251 "e65dd22e-56b9-4821-9b40-bfd20fd55c29" 00:08:00.251 ], 00:08:00.251 "product_name": "Malloc disk", 00:08:00.251 "block_size": 512, 00:08:00.251 "num_blocks": 1048576, 00:08:00.251 "uuid": "e65dd22e-56b9-4821-9b40-bfd20fd55c29", 00:08:00.251 "assigned_rate_limits": { 00:08:00.251 "rw_ios_per_sec": 0, 00:08:00.251 "rw_mbytes_per_sec": 0, 00:08:00.251 "r_mbytes_per_sec": 0, 00:08:00.251 "w_mbytes_per_sec": 0 00:08:00.251 }, 00:08:00.251 "claimed": true, 00:08:00.251 "claim_type": "exclusive_write", 00:08:00.251 "zoned": false, 00:08:00.251 "supported_io_types": { 00:08:00.251 "read": true, 00:08:00.251 "write": true, 00:08:00.251 "unmap": true, 00:08:00.251 "flush": true, 00:08:00.251 "reset": true, 00:08:00.251 "nvme_admin": false, 00:08:00.251 "nvme_io": false, 00:08:00.251 "nvme_io_md": false, 00:08:00.251 "write_zeroes": true, 00:08:00.251 "zcopy": true, 00:08:00.251 "get_zone_info": false, 00:08:00.251 "zone_management": false, 00:08:00.251 
"zone_append": false, 00:08:00.251 "compare": false, 00:08:00.251 "compare_and_write": false, 00:08:00.251 "abort": true, 00:08:00.251 "seek_hole": false, 00:08:00.251 "seek_data": false, 00:08:00.251 "copy": true, 00:08:00.251 "nvme_iov_md": false 00:08:00.251 }, 00:08:00.251 "memory_domains": [ 00:08:00.251 { 00:08:00.251 "dma_device_id": "system", 00:08:00.251 "dma_device_type": 1 00:08:00.251 }, 00:08:00.251 { 00:08:00.251 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:00.251 "dma_device_type": 2 00:08:00.251 } 00:08:00.251 ], 00:08:00.251 "driver_specific": {} 00:08:00.251 } 00:08:00.251 ]' 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:00.251 20:21:52 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:02.185 20:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:02.185 20:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:02.185 20:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:02.185 20:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:02.185 20:21:54 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:04.144 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:04.404 20:21:56 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:05.345 ************************************ 00:08:05.345 START TEST filesystem_in_capsule_ext4 00:08:05.345 ************************************ 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:05.345 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:05.346 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:05.346 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:05.346 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:05.346 20:21:57 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:05.346 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:05.346 20:21:57 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:05.346 mke2fs 1.46.5 (30-Dec-2021) 00:08:05.346 Discarding device blocks: 0/522240 done 00:08:05.346 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:05.346 Filesystem UUID: 76833951-8436-4dd6-8ec6-71643341ad24 00:08:05.346 Superblock backups stored on blocks: 00:08:05.346 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:05.346 00:08:05.346 Allocating group tables: 0/64 done 00:08:05.346 Writing inode tables: 0/64 done 00:08:05.605 Creating journal (8192 blocks): done 00:08:06.696 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:08:06.696 00:08:06.696 20:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:06.696 20:21:58 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:07.267 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 1143965 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:07.528 00:08:07.528 real 0m2.130s 00:08:07.528 user 0m0.023s 00:08:07.528 sys 0m0.055s 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:07.528 ************************************ 00:08:07.528 END TEST filesystem_in_capsule_ext4 00:08:07.528 ************************************ 
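
Each filesystem_in_capsule_* case that passes here runs the same smoke test: format, mount, create and delete a file with syncs in between, unmount, then confirm the target and the block devices are still healthy. In outline (condensed from the trace; device names and the PID check are as logged above):

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync
    rm /mnt/device/aaa && sync
    umount /mnt/device
    kill -0 "$nvmfpid"                      # target process still alive (1143965 in this run)
    lsblk -l -o NAME | grep -qw nvme0n1     # namespace still exposed
    lsblk -l -o NAME | grep -qw nvme0n1p1   # partition table intact
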
00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:07.528 ************************************ 00:08:07.528 START TEST filesystem_in_capsule_btrfs 00:08:07.528 ************************************ 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:07.528 20:21:59 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:07.789 btrfs-progs v6.6.2 00:08:07.789 See https://btrfs.readthedocs.io for more information. 00:08:07.789 00:08:07.789 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:07.789 NOTE: several default settings have changed in version 5.15, please make sure 00:08:07.789 this does not affect your deployments: 00:08:07.789 - DUP for metadata (-m dup) 00:08:07.789 - enabled no-holes (-O no-holes) 00:08:07.789 - enabled free-space-tree (-R free-space-tree) 00:08:07.789 00:08:07.789 Label: (null) 00:08:07.789 UUID: 18afe072-f9b9-4482-8b6f-d272c0d8eae1 00:08:07.789 Node size: 16384 00:08:07.789 Sector size: 4096 00:08:07.789 Filesystem size: 510.00MiB 00:08:07.789 Block group profiles: 00:08:07.789 Data: single 8.00MiB 00:08:07.789 Metadata: DUP 32.00MiB 00:08:07.789 System: DUP 8.00MiB 00:08:07.789 SSD detected: yes 00:08:07.789 Zoned device: no 00:08:07.789 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:07.789 Runtime features: free-space-tree 00:08:07.789 Checksum: crc32c 00:08:07.789 Number of devices: 1 00:08:07.789 Devices: 00:08:07.789 ID SIZE PATH 00:08:07.789 1 510.00MiB /dev/nvme0n1p1 00:08:07.789 00:08:07.789 20:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:07.789 20:22:00 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1143965 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:08.731 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:08.991 00:08:08.991 real 0m1.340s 00:08:08.991 user 0m0.025s 00:08:08.991 sys 0m0.063s 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:08.991 ************************************ 00:08:08.991 END TEST filesystem_in_capsule_btrfs 00:08:08.991 ************************************ 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.991 ************************************ 00:08:08.991 START TEST filesystem_in_capsule_xfs 00:08:08.991 ************************************ 00:08:08.991 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:08.992 20:22:01 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:08.992 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:08.992 = sectsz=512 attr=2, projid32bit=1 00:08:08.992 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:08.992 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:08.992 data = bsize=4096 blocks=130560, imaxpct=25 00:08:08.992 = sunit=0 swidth=0 blks 00:08:08.992 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:08.992 log =internal log bsize=4096 blocks=16384, version=2 00:08:08.992 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:08.992 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:09.934 Discarding blocks...Done. 
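
The xtrace at autotest_common.sh@929-@935 shows make_filesystem branching on the filesystem type to pick the right force flag, since mkfs.ext4 spells it -F while mkfs.btrfs and mkfs.xfs use -f. Reconstructed as a sketch (the helper also declares a retry counter i, whose retry loop is not visible in this excerpt and is omitted here):

    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        # ext4 is the odd one out: uppercase -F forces formatting
        if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
        mkfs."$fstype" "$force" "$dev_name"
    }
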
00:08:09.934 20:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.934 20:22:02 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1143965 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:11.843 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:12.103 00:08:12.103 real 0m3.025s 00:08:12.103 user 0m0.027s 00:08:12.103 sys 0m0.055s 00:08:12.103 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.103 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:12.103 ************************************ 00:08:12.103 END TEST filesystem_in_capsule_xfs 00:08:12.103 ************************************ 00:08:12.103 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:12.103 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:12.363 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:12.363 20:22:04 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1143965 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 1143965 ']' 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 1143965 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:12.363 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1143965 00:08:12.622 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:12.622 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:12.622 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1143965' 00:08:12.622 killing process with pid 1143965 00:08:12.622 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 1143965 00:08:12.622 20:22:04 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 1143965 00:08:12.882 20:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:12.882 00:08:12.883 real 0m13.538s 00:08:12.883 user 0m53.347s 00:08:12.883 sys 0m1.085s 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:12.883 ************************************ 00:08:12.883 END TEST nvmf_filesystem_in_capsule 00:08:12.883 ************************************ 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.883 rmmod nvme_tcp 00:08:12.883 rmmod nvme_fabrics 00:08:12.883 rmmod nvme_keyring 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:12.883 20:22:05 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.430 20:22:07 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:15.430 00:08:15.430 real 0m40.611s 00:08:15.430 user 1m59.081s 00:08:15.430 sys 0m8.672s 00:08:15.430 20:22:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:15.430 20:22:07 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:15.430 ************************************ 00:08:15.430 END TEST nvmf_filesystem 00:08:15.430 ************************************ 00:08:15.430 20:22:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:15.431 20:22:07 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.431 20:22:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:15.431 20:22:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:15.431 20:22:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:15.431 ************************************ 00:08:15.431 START TEST nvmf_target_discovery 00:08:15.431 ************************************ 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:15.431 * Looking for test storage... 
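
The interface and address plumbing that nvmf_tcp_init performs below comes down to splitting the two E810 ports between a target network namespace and the host-side initiator; condensed, with all names and addresses copied from this trace:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target port
    ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator port
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                        # reachability check
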
00:08:15.431 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:15.431 20:22:07 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.568 20:22:15 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:23.568 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:23.568 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.568 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:23.569 Found net devices under 0000:31:00.0: cvl_0_0 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:23.569 Found net devices under 0000:31:00.1: cvl_0_1 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.569 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.569 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.636 ms 00:08:23.569 00:08:23.569 --- 10.0.0.2 ping statistics --- 00:08:23.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.569 rtt min/avg/max/mdev = 0.636/0.636/0.636/0.000 ms 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.569 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.569 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:08:23.569 00:08:23.569 --- 10.0.0.1 ping statistics --- 00:08:23.569 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.569 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=1151559 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 1151559 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 1151559 ']' 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:23.569 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.569 20:22:15 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:23.569 [2024-07-15 20:22:15.634011] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:23.569 [2024-07-15 20:22:15.634073] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.569 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.569 [2024-07-15 20:22:15.714667] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:23.569 [2024-07-15 20:22:15.789060] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:23.569 [2024-07-15 20:22:15.789100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:23.569 [2024-07-15 20:22:15.789108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:23.569 [2024-07-15 20:22:15.789115] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:23.569 [2024-07-15 20:22:15.789120] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:23.569 [2024-07-15 20:22:15.789275] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:23.569 [2024-07-15 20:22:15.789348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:23.569 [2024-07-15 20:22:15.789505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.569 [2024-07-15 20:22:15.789505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.140 [2024-07-15 20:22:16.464859] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
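
The subsystem build-out that follows is a loop unrolled by xtrace; gathered back up, it reads as below (rpc_cmd drives the target's RPC socket, and each null bdev is 102400 blocks of 512 bytes):

    for i in $(seq 1 4); do
        rpc_cmd bdev_null_create Null$i 102400 512
        rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
                -a -s SPDK0000000000000$i
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
                -t tcp -a 10.0.0.2 -s 4420
    done
    rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
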
00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.140 Null1 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.140 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 [2024-07-15 20:22:16.521120] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 Null2 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:24.400 20:22:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 Null3 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:24.400 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 Null4 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.401 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:08:24.661 00:08:24.661 Discovery Log Number of Records 6, Generation counter 6 00:08:24.661 =====Discovery Log Entry 0====== 00:08:24.661 trtype: tcp 00:08:24.661 adrfam: ipv4 00:08:24.661 subtype: current discovery subsystem 00:08:24.661 treq: not required 00:08:24.661 portid: 0 00:08:24.661 trsvcid: 4420 00:08:24.661 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.661 traddr: 10.0.0.2 00:08:24.661 eflags: explicit discovery connections, duplicate discovery information 00:08:24.661 sectype: none 00:08:24.661 =====Discovery Log Entry 1====== 00:08:24.661 trtype: tcp 00:08:24.661 adrfam: ipv4 00:08:24.661 subtype: nvme subsystem 00:08:24.661 treq: not required 00:08:24.661 portid: 0 00:08:24.661 trsvcid: 4420 00:08:24.661 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:24.661 traddr: 10.0.0.2 00:08:24.661 eflags: none 00:08:24.661 sectype: none 00:08:24.661 =====Discovery Log Entry 2====== 00:08:24.661 trtype: tcp 00:08:24.661 adrfam: ipv4 00:08:24.661 subtype: nvme subsystem 00:08:24.661 treq: not required 00:08:24.661 portid: 0 00:08:24.661 trsvcid: 4420 00:08:24.661 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:24.661 traddr: 10.0.0.2 00:08:24.661 eflags: none 00:08:24.661 sectype: none 00:08:24.661 =====Discovery Log Entry 3====== 00:08:24.661 trtype: tcp 00:08:24.661 adrfam: ipv4 00:08:24.661 subtype: nvme subsystem 00:08:24.661 treq: not required 00:08:24.661 portid: 0 00:08:24.661 trsvcid: 4420 00:08:24.661 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:24.661 traddr: 10.0.0.2 00:08:24.661 eflags: none 00:08:24.661 sectype: none 00:08:24.662 =====Discovery Log Entry 4====== 00:08:24.662 trtype: tcp 00:08:24.662 adrfam: ipv4 00:08:24.662 subtype: nvme subsystem 00:08:24.662 treq: not required 
00:08:24.662 portid: 0 00:08:24.662 trsvcid: 4420 00:08:24.662 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:24.662 traddr: 10.0.0.2 00:08:24.662 eflags: none 00:08:24.662 sectype: none 00:08:24.662 =====Discovery Log Entry 5====== 00:08:24.662 trtype: tcp 00:08:24.662 adrfam: ipv4 00:08:24.662 subtype: discovery subsystem referral 00:08:24.662 treq: not required 00:08:24.662 portid: 0 00:08:24.662 trsvcid: 4430 00:08:24.662 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:24.662 traddr: 10.0.0.2 00:08:24.662 eflags: none 00:08:24.662 sectype: none 00:08:24.662 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:24.662 Perform nvmf subsystem discovery via RPC 00:08:24.662 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:24.662 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.662 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.662 [ 00:08:24.662 { 00:08:24.662 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:24.662 "subtype": "Discovery", 00:08:24.662 "listen_addresses": [ 00:08:24.662 { 00:08:24.662 "trtype": "TCP", 00:08:24.662 "adrfam": "IPv4", 00:08:24.662 "traddr": "10.0.0.2", 00:08:24.662 "trsvcid": "4420" 00:08:24.662 } 00:08:24.662 ], 00:08:24.662 "allow_any_host": true, 00:08:24.662 "hosts": [] 00:08:24.662 }, 00:08:24.662 { 00:08:24.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:24.662 "subtype": "NVMe", 00:08:24.662 "listen_addresses": [ 00:08:24.662 { 00:08:24.662 "trtype": "TCP", 00:08:24.662 "adrfam": "IPv4", 00:08:24.662 "traddr": "10.0.0.2", 00:08:24.662 "trsvcid": "4420" 00:08:24.662 } 00:08:24.662 ], 00:08:24.662 "allow_any_host": true, 00:08:24.662 "hosts": [], 00:08:24.662 "serial_number": "SPDK00000000000001", 00:08:24.662 "model_number": "SPDK bdev Controller", 00:08:24.662 "max_namespaces": 32, 00:08:24.662 "min_cntlid": 1, 00:08:24.662 "max_cntlid": 65519, 00:08:24.662 "namespaces": [ 00:08:24.662 { 00:08:24.662 "nsid": 1, 00:08:24.662 "bdev_name": "Null1", 00:08:24.662 "name": "Null1", 00:08:24.662 "nguid": "B9F3E71A35BD435E87C8E2486B5A77A7", 00:08:24.662 "uuid": "b9f3e71a-35bd-435e-87c8-e2486b5a77a7" 00:08:24.662 } 00:08:24.662 ] 00:08:24.662 }, 00:08:24.662 { 00:08:24.662 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:24.662 "subtype": "NVMe", 00:08:24.662 "listen_addresses": [ 00:08:24.662 { 00:08:24.662 "trtype": "TCP", 00:08:24.662 "adrfam": "IPv4", 00:08:24.662 "traddr": "10.0.0.2", 00:08:24.662 "trsvcid": "4420" 00:08:24.662 } 00:08:24.662 ], 00:08:24.662 "allow_any_host": true, 00:08:24.662 "hosts": [], 00:08:24.662 "serial_number": "SPDK00000000000002", 00:08:24.662 "model_number": "SPDK bdev Controller", 00:08:24.662 "max_namespaces": 32, 00:08:24.662 "min_cntlid": 1, 00:08:24.662 "max_cntlid": 65519, 00:08:24.662 "namespaces": [ 00:08:24.662 { 00:08:24.662 "nsid": 1, 00:08:24.662 "bdev_name": "Null2", 00:08:24.662 "name": "Null2", 00:08:24.662 "nguid": "00D2D98C08114857A132439AB3899582", 00:08:24.662 "uuid": "00d2d98c-0811-4857-a132-439ab3899582" 00:08:24.662 } 00:08:24.662 ] 00:08:24.662 }, 00:08:24.662 { 00:08:24.662 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:24.662 "subtype": "NVMe", 00:08:24.662 "listen_addresses": [ 00:08:24.662 { 00:08:24.662 "trtype": "TCP", 00:08:24.662 "adrfam": "IPv4", 00:08:24.662 "traddr": "10.0.0.2", 00:08:24.662 "trsvcid": "4420" 00:08:24.662 } 00:08:24.662 ], 00:08:24.662 "allow_any_host": true, 
00:08:24.662 "hosts": [], 00:08:24.662 "serial_number": "SPDK00000000000003", 00:08:24.662 "model_number": "SPDK bdev Controller", 00:08:24.662 "max_namespaces": 32, 00:08:24.662 "min_cntlid": 1, 00:08:24.662 "max_cntlid": 65519, 00:08:24.662 "namespaces": [ 00:08:24.662 { 00:08:24.662 "nsid": 1, 00:08:24.662 "bdev_name": "Null3", 00:08:24.662 "name": "Null3", 00:08:24.662 "nguid": "1ECE90551D7F40239B01D36B436544E1", 00:08:24.662 "uuid": "1ece9055-1d7f-4023-9b01-d36b436544e1" 00:08:24.662 } 00:08:24.662 ] 00:08:24.662 }, 00:08:24.662 { 00:08:24.662 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:24.662 "subtype": "NVMe", 00:08:24.662 "listen_addresses": [ 00:08:24.662 { 00:08:24.662 "trtype": "TCP", 00:08:24.662 "adrfam": "IPv4", 00:08:24.662 "traddr": "10.0.0.2", 00:08:24.662 "trsvcid": "4420" 00:08:24.662 } 00:08:24.662 ], 00:08:24.662 "allow_any_host": true, 00:08:24.662 "hosts": [], 00:08:24.662 "serial_number": "SPDK00000000000004", 00:08:24.662 "model_number": "SPDK bdev Controller", 00:08:24.662 "max_namespaces": 32, 00:08:24.662 "min_cntlid": 1, 00:08:24.662 "max_cntlid": 65519, 00:08:24.662 "namespaces": [ 00:08:24.662 { 00:08:24.662 "nsid": 1, 00:08:24.662 "bdev_name": "Null4", 00:08:24.662 "name": "Null4", 00:08:24.662 "nguid": "C9FAB8CBF78B4EA3AF8582DA06416801", 00:08:24.662 "uuid": "c9fab8cb-f78b-4ea3-af85-82da06416801" 00:08:24.662 } 00:08:24.662 ] 00:08:24.662 } 00:08:24.662 ] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:24.663 20:22:16 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:24.663 rmmod nvme_tcp 00:08:24.663 rmmod nvme_fabrics 00:08:24.663 rmmod nvme_keyring 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 1151559 ']' 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 1151559 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 1151559 ']' 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 1151559 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:24.663 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1151559 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1151559' 00:08:24.923 killing process with pid 1151559 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 1151559 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 1151559 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:24.923 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:24.924 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:24.924 20:22:17 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:24.924 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:24.924 20:22:17 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.467 20:22:19 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:27.467 00:08:27.467 real 0m12.020s 00:08:27.467 user 0m8.088s 00:08:27.467 sys 0m6.494s 00:08:27.467 20:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:27.467 20:22:19 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:27.467 ************************************ 00:08:27.467 END TEST nvmf_target_discovery 00:08:27.467 ************************************ 00:08:27.467 20:22:19 nvmf_tcp -- common/autotest_common.sh@1142 
-- # return 0 00:08:27.467 20:22:19 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.467 20:22:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:27.467 20:22:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:27.467 20:22:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:27.467 ************************************ 00:08:27.467 START TEST nvmf_referrals 00:08:27.467 ************************************ 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:27.467 * Looking for test storage... 00:08:27.467 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
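
Editor's note: referrals.sh defines 127.0.0.2/.3/.4 above as referral addresses and, in the steps that follow, registers them with the discovery service on port 4430. A minimal sketch of that add/verify/remove cycle, assuming a target with a discovery listener on 10.0.0.2:8009 (started later in this log) and scripts/rpc.py available as rpc.py:

    rpc.py nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430
    rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr'    # expect 127.0.0.2
    nvme discover -t tcp -a 10.0.0.2 -s 8009 -o json \
        | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr'
    rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430

The later steps in this test additionally pass -n with either "discovery" or a subsystem NQN such as nqn.2016-06.io.spdk:cnode1, which controls whether the referral is reported as a discovery-subsystem or nvme-subsystem record.
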
00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:27.467 20:22:19 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:35.616 20:22:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:35.616 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:35.617 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:35.617 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.617 20:22:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:35.617 Found net devices under 0000:31:00.0: cvl_0_0 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:35.617 Found net devices under 0000:31:00.1: cvl_0_1 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:35.617 20:22:27 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:35.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:35.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.551 ms 00:08:35.617 00:08:35.617 --- 10.0.0.2 ping statistics --- 00:08:35.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.617 rtt min/avg/max/mdev = 0.551/0.551/0.551/0.000 ms 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:35.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:35.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:08:35.617 00:08:35.617 --- 10.0.0.1 ping statistics --- 00:08:35.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:35.617 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=1156607 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 1156607 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 1156607 ']' 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
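
Editor's note: the two pings above confirm the namespace split that common.sh's nvmf_tcp_init just performed: the target-side port is moved into cvl_0_0_ns_spdk as 10.0.0.2 while the initiator side stays in the root namespace as 10.0.0.1. Condensed from the commands in this log (the cvl_0_0/cvl_0_1 interface names are specific to this runner):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                                   # cross-namespace reachability
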
00:08:35.617 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:35.617 20:22:27 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:35.617 [2024-07-15 20:22:27.833539] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:08:35.617 [2024-07-15 20:22:27.833588] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.617 EAL: No free 2048 kB hugepages reported on node 1 00:08:35.617 [2024-07-15 20:22:27.907336] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:35.617 [2024-07-15 20:22:27.972452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:35.617 [2024-07-15 20:22:27.972484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:35.617 [2024-07-15 20:22:27.972491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:35.617 [2024-07-15 20:22:27.972498] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:35.617 [2024-07-15 20:22:27.972503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:35.617 [2024-07-15 20:22:27.972676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:35.617 [2024-07-15 20:22:27.972791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:35.617 [2024-07-15 20:22:27.972948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.617 [2024-07-15 20:22:27.972949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.559 [2024-07-15 20:22:28.650901] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.559 [2024-07-15 20:22:28.667081] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.559 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.560 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.821 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:36.821 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:36.821 20:22:28 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:36.821 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.821 20:22:28 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:36.821 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.082 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.344 20:22:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.344 20:22:29 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:37.605 20:22:29 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.605 20:22:29 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:37.865 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:37.866 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:38.127 
20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:38.127 rmmod nvme_tcp 00:08:38.127 rmmod nvme_fabrics 00:08:38.127 rmmod nvme_keyring 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 1156607 ']' 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 1156607 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 1156607 ']' 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 1156607 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1156607 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1156607' 00:08:38.127 killing process with pid 1156607 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 1156607 00:08:38.127 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 1156607 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:38.388 20:22:30 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.303 20:22:32 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:40.303 00:08:40.303 real 0m13.219s 00:08:40.303 user 0m13.402s 00:08:40.303 sys 0m6.634s 00:08:40.303 20:22:32 
nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:40.303 20:22:32 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:40.303 ************************************ 00:08:40.303 END TEST nvmf_referrals 00:08:40.303 ************************************ 00:08:40.303 20:22:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:40.303 20:22:32 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.303 20:22:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:40.303 20:22:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:40.303 20:22:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:40.303 ************************************ 00:08:40.303 START TEST nvmf_connect_disconnect 00:08:40.303 ************************************ 00:08:40.303 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:40.564 * Looking for test storage... 00:08:40.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:40.564 20:22:32 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:40.564 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:40.565 20:22:32 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:48.802 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:08:48.803 Found 0000:31:00.0 (0x8086 - 0x159b) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:08:48.803 Found 0000:31:00.1 (0x8086 - 0x159b) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:48.803 20:22:40 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:08:48.803 Found net devices under 0000:31:00.0: cvl_0_0 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:08:48.803 Found net devices under 0000:31:00.1: cvl_0_1 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:48.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:48.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.819 ms 00:08:48.803 00:08:48.803 --- 10.0.0.2 ping statistics --- 00:08:48.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.803 rtt min/avg/max/mdev = 0.819/0.819/0.819/0.000 ms 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:48.803 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:48.803 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:08:48.803 00:08:48.803 --- 10.0.0.1 ping statistics --- 00:08:48.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:48.803 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=1162053 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 1162053 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 1162053 ']' 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:48.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:48.803 20:22:40 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:48.803 [2024-07-15 20:22:40.903570] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:08:48.803 [2024-07-15 20:22:40.903623] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:48.803 EAL: No free 2048 kB hugepages reported on node 1 00:08:48.803 [2024-07-15 20:22:40.979765] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:48.803 [2024-07-15 20:22:41.048468] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:48.804 [2024-07-15 20:22:41.048518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:48.804 [2024-07-15 20:22:41.048526] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:48.804 [2024-07-15 20:22:41.048533] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:48.804 [2024-07-15 20:22:41.048539] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:48.804 [2024-07-15 20:22:41.048675] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:48.804 [2024-07-15 20:22:41.048798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:48.804 [2024-07-15 20:22:41.048955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:48.804 [2024-07-15 20:22:41.048957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 [2024-07-15 20:22:41.721906] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.376 20:22:41 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.376 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:49.638 [2024-07-15 20:22:41.781318] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:08:49.638 20:22:41 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:53.846 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:57.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:00.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:07.979 rmmod nvme_tcp 00:09:07.979 rmmod nvme_fabrics 00:09:07.979 rmmod nvme_keyring 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 1162053 ']' 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 1162053 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- 
common/autotest_common.sh@948 -- # '[' -z 1162053 ']' 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 1162053 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1162053 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1162053' 00:09:07.979 killing process with pid 1162053 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 1162053 00:09:07.979 20:22:59 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 1162053 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:07.979 20:23:00 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:09.894 20:23:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:09.894 00:09:09.894 real 0m29.447s 00:09:09.894 user 1m17.972s 00:09:09.894 sys 0m6.986s 00:09:09.894 20:23:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:09.894 20:23:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:09:09.894 ************************************ 00:09:09.894 END TEST nvmf_connect_disconnect 00:09:09.894 ************************************ 00:09:09.894 20:23:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:09.894 20:23:02 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:09.894 20:23:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:09.894 20:23:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:09.894 20:23:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:09.894 ************************************ 00:09:09.894 START TEST nvmf_multitarget 00:09:09.894 ************************************ 00:09:09.894 20:23:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:09:10.156 * Looking for test storage... 
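The nvmf_connect_disconnect run that ends above provisions the target over JSON-RPC and then drives five connect/disconnect cycles from the initiator. Condensed from the rpc_cmd calls in the trace (rpc_cmd wraps scripts/rpc.py, shortened to rpc.py here; the connect loop itself runs under set +x, so its body below is an assumption modeled on the harness's NVME_CONNECT settings, with the --hostnqn/--hostid arguments omitted):

  rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
  rpc.py bdev_malloc_create 64 512                          # creates Malloc0
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  for i in $(seq 1 5); do                                   # num_iterations=5 in the trace
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1           # emits 'disconnected 1 controller(s)'
  done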
00:09:10.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:10.156 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
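The nvmftestinit that follows re-derives the same two-port topology the previous test used: one ice port (cvl_0_0) is moved into a private network namespace to host the target at 10.0.0.2, while its peer (cvl_0_1) stays in the root namespace as the 10.0.0.1 initiator, with an iptables rule admitting NVMe/TCP traffic on port 4420. Condensed from the trace below, the plumbing is roughly:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                        # root ns -> target ns
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target ns -> root ns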
00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:09:10.157 20:23:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:18.306 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:18.307 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:18.307 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:18.307 Found net devices under 0000:31:00.0: cvl_0_0 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:18.307 Found net devices under 0000:31:00.1: cvl_0_1 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:18.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:18.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.597 ms 00:09:18.307 00:09:18.307 --- 10.0.0.2 ping statistics --- 00:09:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.307 rtt min/avg/max/mdev = 0.597/0.597/0.597/0.000 ms 00:09:18.307 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:18.307 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:18.307 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.265 ms 00:09:18.307 00:09:18.307 --- 10.0.0.1 ping statistics --- 00:09:18.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.307 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=1170539 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 1170539 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 1170539 ']' 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:18.308 20:23:10 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:18.308 [2024-07-15 20:23:10.613958] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
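With connectivity verified, the harness launches the target inside the namespace and blocks in waitforlisten until the application is up (the 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' line above). Reduced to its essentials — waitforlisten's actual probe is not visible in this trace, so polling rpc_get_methods is an assumption standing in for it:

  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                               # keep retrying until the RPC socket answers
  done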
00:09:18.308 [2024-07-15 20:23:10.614012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:18.308 EAL: No free 2048 kB hugepages reported on node 1 00:09:18.568 [2024-07-15 20:23:10.693904] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:18.568 [2024-07-15 20:23:10.767145] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:18.568 [2024-07-15 20:23:10.767187] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:18.568 [2024-07-15 20:23:10.767194] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:18.568 [2024-07-15 20:23:10.767201] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:18.568 [2024-07-15 20:23:10.767206] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:18.568 [2024-07-15 20:23:10.767361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:18.568 [2024-07-15 20:23:10.767470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:18.568 [2024-07-15 20:23:10.767627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:18.568 [2024-07-15 20:23:10.767628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:19.139 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:09:19.400 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:09:19.400 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:09:19.400 "nvmf_tgt_1" 00:09:19.400 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:09:19.400 "nvmf_tgt_2" 00:09:19.400 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:19.400 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:09:19.661 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:09:19.661 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:09:19.661 true 00:09:19.661 20:23:11 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:09:19.661 true 00:09:19.661 20:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:09:19.661 20:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.921 rmmod nvme_tcp 00:09:19.921 rmmod nvme_fabrics 00:09:19.921 rmmod nvme_keyring 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 1170539 ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 1170539 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 1170539 ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 1170539 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1170539 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1170539' 00:09:19.921 killing process with pid 1170539 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 1170539 00:09:19.921 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 1170539 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:20.182 20:23:12 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.093 20:23:14 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.093 00:09:22.093 real 0m12.255s 00:09:22.093 user 0m9.422s 00:09:22.093 sys 0m6.534s 00:09:22.093 20:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:22.093 20:23:14 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:09:22.093 ************************************ 00:09:22.093 END TEST nvmf_multitarget 00:09:22.093 ************************************ 00:09:22.354 20:23:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:09:22.354 20:23:14 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:22.354 20:23:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:22.354 20:23:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:22.355 20:23:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:22.355 ************************************ 00:09:22.355 START TEST nvmf_rpc 00:09:22.355 ************************************ 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:09:22.355 * Looking for test storage... 
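Before the nvmf_rpc output begins in earnest, the nvmf_multitarget run that just ended is worth condensing: it creates two extra targets, checks that nvmf_get_targets reports three (the default target plus the two new ones), deletes both, and checks the count drops back to one. A sketch using the wrapper script from this run (the -s flag presumably caps subsystems per target):

    rpc=test/nvmf/target/multitarget_rpc.py
    $rpc nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc nvmf_create_target -n nvmf_tgt_2 -s 32
    [ "$($rpc nvmf_get_targets | jq length)" -eq 3 ]   # default + 2 created
    $rpc nvmf_delete_target -n nvmf_tgt_1
    $rpc nvmf_delete_target -n nvmf_tgt_2
    [ "$($rpc nvmf_get_targets | jq length)" -eq 1 ]   # back to the default only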
00:09:22.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.355 20:23:14 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
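Two things happen around here. First, common.sh derives the initiator identity once: nvme gen-hostnqn emits nqn.2014-08.org.nvmexpress:uuid:<uuid>, and the trailing UUID doubles as the host ID, roughly:

    NVME_HOSTNQN=$(nvme gen-hostnqn)
    NVME_HOSTID=${NVME_HOSTNQN##*:}   # keep only the UUID after the last ':'

Second, the trace that follows is gather_supported_nvmf_pci_devs matching supported NIC PCI IDs and resolving each hit to a kernel netdev through sysfs. A rough sketch of that pattern for the E810 parts found in this run (0x8086:0x159b; the real helper also knows x722 and Mellanox IDs):

    for pci in /sys/bus/pci/devices/*; do
        read -r ven < "$pci/vendor"
        read -r dev < "$pci/device"
        if [[ $ven == 0x8086 && ( $dev == 0x159b || $dev == 0x1592 ) ]]; then
            net=$(ls "$pci/net" 2>/dev/null)   # e.g. cvl_0_0, cvl_0_1
            echo "Found ${pci##*/} ($ven - $dev): ${net:-no netdev}"
        fi
    done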
00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:09:30.522 Found 0000:31:00.0 (0x8086 - 0x159b) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:09:30.522 Found 0000:31:00.1 (0x8086 - 0x159b) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:09:30.522 Found net devices under 0000:31:00.0: cvl_0_0 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:09:30.522 Found net devices under 0000:31:00.1: cvl_0_1 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:30.522 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:30.523 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:30.806 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:30.806 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.530 ms 00:09:30.806 00:09:30.806 --- 10.0.0.2 ping statistics --- 00:09:30.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.806 rtt min/avg/max/mdev = 0.530/0.530/0.530/0.000 ms 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:30.806 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:30.806 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.317 ms 00:09:30.806 00:09:30.806 --- 10.0.0.1 ping statistics --- 00:09:30.806 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:30.806 rtt min/avg/max/mdev = 0.317/0.317/0.317/0.000 ms 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=1175584 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 1175584 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 1175584 ']' 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:30.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:30.806 20:23:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:30.806 [2024-07-15 20:23:22.997006] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:09:30.807 [2024-07-15 20:23:22.997054] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:30.807 EAL: No free 2048 kB hugepages reported on node 1 00:09:30.807 [2024-07-15 20:23:23.073093] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:30.807 [2024-07-15 20:23:23.138944] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:30.807 [2024-07-15 20:23:23.138983] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:30.807 [2024-07-15 20:23:23.138991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:30.807 [2024-07-15 20:23:23.138997] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:30.807 [2024-07-15 20:23:23.139003] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:30.807 [2024-07-15 20:23:23.139141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:30.807 [2024-07-15 20:23:23.139259] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:30.807 [2024-07-15 20:23:23.139362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:30.807 [2024-07-15 20:23:23.139363] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:31.387 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:31.387 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:09:31.387 20:23:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:31.387 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:31.387 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:09:31.646 "tick_rate": 2400000000, 00:09:31.646 "poll_groups": [ 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_000", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_001", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_002", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_003", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [] 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 }' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 [2024-07-15 20:23:23.925176] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:09:31.646 "tick_rate": 2400000000, 00:09:31.646 "poll_groups": [ 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_000", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [ 00:09:31.646 { 00:09:31.646 "trtype": "TCP" 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_001", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [ 00:09:31.646 { 00:09:31.646 "trtype": "TCP" 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_002", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [ 00:09:31.646 { 00:09:31.646 "trtype": "TCP" 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 }, 00:09:31.646 { 00:09:31.646 "name": "nvmf_tgt_poll_group_003", 00:09:31.646 "admin_qpairs": 0, 00:09:31.646 "io_qpairs": 0, 00:09:31.646 "current_admin_qpairs": 0, 00:09:31.646 "current_io_qpairs": 0, 00:09:31.646 "pending_bdev_io": 0, 00:09:31.646 "completed_nvme_io": 0, 00:09:31.646 "transports": [ 00:09:31.646 { 00:09:31.646 "trtype": "TCP" 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 } 00:09:31.646 ] 00:09:31.646 }' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:09:31.646 20:23:23 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:31.646 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:09:31.647 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:09:31.647 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
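The stats checks around here bracket transport creation: nvmf_get_stats first shows four poll groups (one per core in -m 0xF) with empty transport lists, and after nvmf_create_transport each group should list a TCP transport. Condensed with plain rpc.py calls (a sketch, not the test's exact rpc_cmd/jsum helpers; flags copied from this run):

    rpc_py=scripts/rpc.py
    $rpc_py nvmf_get_stats | jq '.poll_groups | length'               # expect 4
    $rpc_py nvmf_create_transport -t tcp -o -u 8192                   # -u sets the I/O unit size
    $rpc_py nvmf_get_stats | jq '.poll_groups[].transports[].trtype'  # "TCP", four times

Further below, the test builds a subsystem and exercises the host allow list: with allow_any_host disabled, nvme connect is rejected until the host NQN is added; removing the host makes connect fail again, and re-enabling allow_any_host lets it through. That sequence, condensed (NQNs, serial, and listener address as in this run):

    nqn=nqn.2016-06.io.spdk:cnode1
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py nvmf_create_subsystem $nqn -a -s SPDKISFASTANDAWESOME
    $rpc_py nvmf_subsystem_add_ns $nqn Malloc1
    $rpc_py nvmf_subsystem_allow_any_host -d $nqn     # enforce the allow list
    $rpc_py nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s 4420
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN" \
        || echo "rejected: host not on the allow list"
    $rpc_py nvmf_subsystem_add_host $nqn "$NVME_HOSTNQN"
    nvme connect -t tcp -n $nqn -a 10.0.0.2 -s 4420 --hostnqn="$NVME_HOSTNQN"   # succeeds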
00:09:31.647 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:09:31.647 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 Malloc1 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 [2024-07-15 20:23:24.113040] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.2 -s 4420 00:09:31.906 [2024-07-15 20:23:24.139792] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:31.906 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:31.906 could not add new controller: failed to write to nvme-fabrics device 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:31.906 20:23:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:33.288 20:23:25 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:09:33.288 20:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:33.288 20:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:33.288 20:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:33.288 20:23:25 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:35.830 20:23:27 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:35.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:35.830 [2024-07-15 20:23:27.793807] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396' 00:09:35.830 Failed to write to /dev/nvme-fabrics: Input/output error 00:09:35.830 could not add new controller: failed to write to nvme-fabrics device 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:35.830 20:23:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:37.213 20:23:29 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:09:37.213 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:37.213 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:37.213 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:37.213 20:23:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:39.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:39.125 20:23:31 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 [2024-07-15 20:23:31.477820] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:39.125 20:23:31 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:41.037 20:23:32 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:41.037 20:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:41.037 20:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:41.037 20:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:41.037 20:23:32 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:42.953 20:23:34 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:42.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.953 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:42.953 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:42.953 20:23:35 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:42.953 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 [2024-07-15 20:23:35.150118] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:42.954 20:23:35 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:44.340 20:23:36 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:44.340 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:09:44.340 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:44.340 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:44.340 20:23:36 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:46.255 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:46.255 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:46.255 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:46.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 [2024-07-15 20:23:38.818080] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:46.517 20:23:38 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.430 20:23:40 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:48.430 20:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:48.430 20:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.430 20:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:48.430 20:23:40 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:50.342 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:50.343 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 [2024-07-15 20:23:42.485367] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:50.343 20:23:42 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:51.726 20:23:43 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:51.726 20:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:51.726 20:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:51.726 20:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:51.726 20:23:43 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:53.639 
20:23:45 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:53.639 20:23:45 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:53.900 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 [2024-07-15 20:23:46.162783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:53.900 20:23:46 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.900 20:23:46 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:55.811 20:23:47 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:09:55.811 20:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:09:55.811 20:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:55.811 20:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:55.811 20:23:47 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:57.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 [2024-07-15 20:23:49.873414] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 [2024-07-15 20:23:49.933528] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.723 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 [2024-07-15 20:23:49.993714] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:49 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
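The waitforserial / waitforserial_disconnect calls traced above are the polling helpers from common/autotest_common.sh. Reconstructed from the trace lines alone, a minimal sketch looks like this; the 2-second sleep and the bound of roughly 15 retries are visible in the trace (sh@1198-1231), while the exact timeout handling and argument handling of the real helpers are assumptions, not the verbatim SPDK source:

waitforserial() {
    # Poll until a block device carrying the given serial shows up in lsblk,
    # as in the sh@1205-1208 lines above: sleep 2, then count matching rows.
    local serial=$1 i=0
    local nvme_device_counter=1 nvme_devices=0
    sleep 2
    while (( i++ <= 15 )); do
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
        sleep 2
    done
    return 1
}

waitforserial_disconnect() {
    # Inverse check after "nvme disconnect" (sh@1219-1231 above): wait until
    # no lsblk row still carries the serial; retry limit assumed.
    local serial=$1 i=0
    while lsblk -o NAME,SERIAL | grep -q -w "$serial" ||
          lsblk -l -o NAME,SERIAL | grep -q -w "$serial"; do
        (( i++ > 15 )) && return 1
        sleep 2
    done
    return 0
}

Either helper returning 0 is what produces the "return 0" trace entries just before and after each "nvme disconnect" above.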
00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 [2024-07-15 20:23:50.053913] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:57.724 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
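Stripped of the xtrace noise, the two loop bodies this excerpt cycles through (target/rpc.sh@81-94 with a live host connection, then @99-107 purely on the target side) reduce to the sketch below, together with the statistics check that closes the test a few lines further on. rpc_cmd forwards to scripts/rpc.py against the running target; the value of $loops is set earlier in rpc.sh and is not visible in this excerpt, and feeding jsum from $stats via a here-string is an assumption reconstructed from the @19-20 trace lines:

# Connect/disconnect cycle (rpc.sh@81-94, as traced above):
for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 \
                 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    waitforserial SPDKISFASTANDAWESOME
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    waitforserial_disconnect SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

# Target-side-only cycle (rpc.sh@99-107): same lifecycle, no host attached;
# the namespace is added with the default NSID and removed as NSID 1.
for i in $(seq 1 $loops); do
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
    rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
done

# Final check (rpc.sh@110-113, visible below): dump transport stats and
# require that some admin and I/O qpairs were actually served this run.
stats=$(rpc_cmd nvmf_get_stats)
jsum() {
    local filter=$1
    jq "$filter" <<< "$stats" | awk '{s+=$1}END{print s}'
}
(( $(jsum '.poll_groups[].admin_qpairs') > 0 ))
(( $(jsum '.poll_groups[].io_qpairs') > 0 ))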
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 [2024-07-15 20:23:50.114078] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{
00:09:57.984 "tick_rate": 2400000000,
00:09:57.984 "poll_groups": [
00:09:57.984 {
00:09:57.984 "name": "nvmf_tgt_poll_group_000",
00:09:57.984 "admin_qpairs": 0,
00:09:57.984 "io_qpairs": 224,
00:09:57.984 "current_admin_qpairs": 0,
00:09:57.984 "current_io_qpairs": 0,
00:09:57.984 "pending_bdev_io": 0,
00:09:57.984 "completed_nvme_io": 225,
00:09:57.984 "transports": [
00:09:57.984 {
00:09:57.984 "trtype": "TCP"
00:09:57.984 }
00:09:57.984 ]
00:09:57.984 },
00:09:57.984 {
00:09:57.984 "name": "nvmf_tgt_poll_group_001",
00:09:57.984 "admin_qpairs": 1,
00:09:57.984 "io_qpairs": 223,
00:09:57.984 "current_admin_qpairs": 0,
00:09:57.984 "current_io_qpairs": 0,
00:09:57.984 "pending_bdev_io": 0,
00:09:57.984 "completed_nvme_io": 272,
00:09:57.984 "transports": [
00:09:57.984 {
00:09:57.984 "trtype": "TCP"
00:09:57.984 }
00:09:57.984 ]
00:09:57.984 },
00:09:57.984 {
00:09:57.984 "name": "nvmf_tgt_poll_group_002",
00:09:57.984 "admin_qpairs": 6,
00:09:57.984 "io_qpairs": 218,
00:09:57.984 "current_admin_qpairs": 0,
00:09:57.984 "current_io_qpairs": 0,
00:09:57.984 "pending_bdev_io": 0,
00:09:57.984 "completed_nvme_io": 464,
00:09:57.984 "transports": [
00:09:57.984 {
00:09:57.984 "trtype": "TCP"
00:09:57.984 }
00:09:57.984 ]
00:09:57.984 },
00:09:57.984 {
00:09:57.984 "name": "nvmf_tgt_poll_group_003",
00:09:57.984 "admin_qpairs": 0,
00:09:57.984 "io_qpairs": 224,
00:09:57.984 "current_admin_qpairs": 0,
00:09:57.984 "current_io_qpairs": 0,
00:09:57.984 "pending_bdev_io": 0,
00:09:57.984 "completed_nvme_io": 278,
00:09:57.984 "transports": [
00:09:57.984 {
00:09:57.984 "trtype": "TCP"
00:09:57.984 }
00:09:57.984 ]
00:09:57.984 }
00:09:57.984 ]
00:09:57.984 }'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 ))
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 889 > 0 ))
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20}
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:09:57.984 rmmod nvme_tcp
00:09:57.984 rmmod nvme_fabrics
00:09:57.984 rmmod nvme_keyring
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 1175584 ']'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 1175584
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 1175584 ']'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 1175584
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:09:57.984 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1175584
00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc --
common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1175584' 00:09:58.244 killing process with pid 1175584 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 1175584 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 1175584 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:09:58.244 20:23:50 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.886 20:23:52 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:00.886 00:10:00.886 real 0m38.066s 00:10:00.886 user 1m51.806s 00:10:00.886 sys 0m7.665s 00:10:00.886 20:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:00.886 20:23:52 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:00.886 ************************************ 00:10:00.886 END TEST nvmf_rpc 00:10:00.886 ************************************ 00:10:00.886 20:23:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:00.886 20:23:52 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:00.886 20:23:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:00.886 20:23:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:00.886 20:23:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:00.886 ************************************ 00:10:00.886 START TEST nvmf_invalid 00:10:00.886 ************************************ 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:10:00.886 * Looking for test storage... 
00:10:00.886 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:10:00.886 20:23:52 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:10:09.068 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:09.069 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:09.069 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:09.069 Found net devices under 0000:31:00.0: cvl_0_0 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:09.069 Found net devices under 0000:31:00.1: cvl_0_1 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:09.069 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:09.069 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:10:09.069
00:10:09.069 --- 10.0.0.2 ping statistics ---
00:10:09.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:09.069 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:09.069 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:09.069 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.334 ms
00:10:09.069
00:10:09.069 --- 10.0.0.1 ping statistics ---
00:10:09.069 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:09.069 rtt min/avg/max/mdev = 0.334/0.334/0.334/0.000 ms
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=1185847
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 1185847
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 1185847 ']'
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:09.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable
00:10:09.069 20:24:00 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:10:09.069 [2024-07-15 20:24:01.006815] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
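For orientation, those two pings verify the namespace plumbing that nvmf_tcp_init set up in the trace just above (nvmf/common.sh@229-268): the target-side port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace while the initiator port cvl_0_1 stays in the root namespace, which is why the target is launched under ip netns exec. Condensed from the trace, with all commands, addresses, and interface names exactly as logged:

# Target NIC into its own namespace; initiator NIC stays in the root ns.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns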
00:10:09.069 [2024-07-15 20:24:01.006880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:09.069 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.069 [2024-07-15 20:24:01.090207] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:09.069 [2024-07-15 20:24:01.165174] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:09.069 [2024-07-15 20:24:01.165216] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:09.069 [2024-07-15 20:24:01.165225] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:09.069 [2024-07-15 20:24:01.165236] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:09.069 [2024-07-15 20:24:01.165242] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:09.069 [2024-07-15 20:24:01.165314] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:09.069 [2024-07-15 20:24:01.165450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:09.069 [2024-07-15 20:24:01.165608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:09.070 [2024-07-15 20:24:01.165609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:10:09.640 20:24:01 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode15453 00:10:09.640 [2024-07-15 20:24:01.977201] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:10:09.640 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:10:09.640 { 00:10:09.640 "nqn": "nqn.2016-06.io.spdk:cnode15453", 00:10:09.640 "tgt_name": "foobar", 00:10:09.640 "method": "nvmf_create_subsystem", 00:10:09.640 "req_id": 1 00:10:09.640 } 00:10:09.640 Got JSON-RPC error response 00:10:09.640 response: 00:10:09.640 { 00:10:09.640 "code": -32603, 00:10:09.640 "message": "Unable to find target foobar" 00:10:09.640 }' 00:10:09.640 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:10:09.640 { 00:10:09.640 "nqn": "nqn.2016-06.io.spdk:cnode15453", 00:10:09.640 "tgt_name": "foobar", 00:10:09.640 "method": "nvmf_create_subsystem", 00:10:09.640 "req_id": 1 00:10:09.640 } 00:10:09.640 Got JSON-RPC error response 00:10:09.640 response: 00:10:09.640 { 00:10:09.640 "code": -32603, 00:10:09.640 "message": "Unable to find target foobar" 
00:10:09.640 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:10:09.640 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:10:09.640 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode4814 00:10:09.900 [2024-07-15 20:24:02.157781] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4814: invalid serial number 'SPDKISFASTANDAWESOME' 00:10:09.900 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:10:09.900 { 00:10:09.900 "nqn": "nqn.2016-06.io.spdk:cnode4814", 00:10:09.900 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:09.901 "method": "nvmf_create_subsystem", 00:10:09.901 "req_id": 1 00:10:09.901 } 00:10:09.901 Got JSON-RPC error response 00:10:09.901 response: 00:10:09.901 { 00:10:09.901 "code": -32602, 00:10:09.901 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:09.901 }' 00:10:09.901 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:10:09.901 { 00:10:09.901 "nqn": "nqn.2016-06.io.spdk:cnode4814", 00:10:09.901 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:10:09.901 "method": "nvmf_create_subsystem", 00:10:09.901 "req_id": 1 00:10:09.901 } 00:10:09.901 Got JSON-RPC error response 00:10:09.901 response: 00:10:09.901 { 00:10:09.901 "code": -32602, 00:10:09.901 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:10:09.901 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:09.901 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:10:09.901 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode7856 00:10:10.161 [2024-07-15 20:24:02.334356] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7856: invalid model number 'SPDK_Controller' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:10:10.161 { 00:10:10.161 "nqn": "nqn.2016-06.io.spdk:cnode7856", 00:10:10.161 "model_number": "SPDK_Controller\u001f", 00:10:10.161 "method": "nvmf_create_subsystem", 00:10:10.161 "req_id": 1 00:10:10.161 } 00:10:10.161 Got JSON-RPC error response 00:10:10.161 response: 00:10:10.161 { 00:10:10.161 "code": -32602, 00:10:10.161 "message": "Invalid MN SPDK_Controller\u001f" 00:10:10.161 }' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:10:10.161 { 00:10:10.161 "nqn": "nqn.2016-06.io.spdk:cnode7856", 00:10:10.161 "model_number": "SPDK_Controller\u001f", 00:10:10.161 "method": "nvmf_create_subsystem", 00:10:10.161 "req_id": 1 00:10:10.161 } 00:10:10.161 Got JSON-RPC error response 00:10:10.161 response: 00:10:10.161 { 00:10:10.161 "code": -32602, 00:10:10.161 "message": "Invalid MN SPDK_Controller\u001f" 00:10:10.161 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' 
'85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:10:10.161 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
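The long printf/echo/string+= run here (and again below, for a 41-character value) is just the xtrace of gen_random_s, which assembles a random string one printable character at a time. A condensed reconstruction from the trace alone; the real invalid.sh may differ in details:

  gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))   # the same '32'..'127' code-point array seen in the trace
    for ((ll = 0; ll < length; ll++)); do
      # Pick a random code point, render it in hex, and append that character.
      string+=$(echo -e "\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
    done
    # invalid.sh@28 also compares the first character against '-', presumably so the
    # value is never parsed as an option; its handling of that case is not visible here.
    echo "$string"
  }

The 21-character result is then passed to nvmf_create_subsystem -s, one byte over the NVMe 20-byte serial-number field, so the target must reject it with "Invalid SN"; the 41-character run below similarly overflows the 40-byte model number.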
00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ > == \- ]] 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '>lX'\''s"NDO$ohM@bc$k9.#' 00:10:10.162 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '>lX'\''s"NDO$ohM@bc$k9.#' nqn.2016-06.io.spdk:cnode31979 00:10:10.422 [2024-07-15 20:24:02.671432] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31979: invalid serial number '>lX's"NDO$ohM@bc$k9.#' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:10:10.422 { 00:10:10.422 "nqn": "nqn.2016-06.io.spdk:cnode31979", 00:10:10.422 "serial_number": ">lX'\''s\"NDO$ohM@bc$k9.#", 00:10:10.422 "method": "nvmf_create_subsystem", 00:10:10.422 "req_id": 1 00:10:10.422 } 00:10:10.422 Got JSON-RPC error response 00:10:10.422 response: 00:10:10.422 { 00:10:10.422 "code": -32602, 00:10:10.422 "message": "Invalid SN >lX'\''s\"NDO$ohM@bc$k9.#" 00:10:10.422 }' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:10:10.422 { 00:10:10.422 "nqn": "nqn.2016-06.io.spdk:cnode31979", 00:10:10.422 "serial_number": ">lX's\"NDO$ohM@bc$k9.#", 00:10:10.422 "method": "nvmf_create_subsystem", 00:10:10.422 "req_id": 1 00:10:10.422 } 00:10:10.422 Got JSON-RPC error response 00:10:10.422 response: 00:10:10.422 { 00:10:10.422 "code": -32602, 00:10:10.422 "message": "Invalid SN >lX's\"NDO$ohM@bc$k9.#" 00:10:10.422 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # string+=z 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:10:10.422 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll++ )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.423 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # printf %x 77 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x4f' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.684 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.685 20:24:02 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ z == \- ]] 00:10:10.685 20:24:03 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'zuo /dev/null' 00:10:12.773 20:24:04 nvmf_tcp.nvmf_invalid 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.688 20:24:06 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:14.688 00:10:14.688 real 0m14.326s 00:10:14.688 user 0m19.501s 00:10:14.688 sys 0m6.933s 00:10:14.688 20:24:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:14.688 20:24:06 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:10:14.688 ************************************ 00:10:14.688 END TEST nvmf_invalid 00:10:14.688 ************************************ 00:10:14.688 20:24:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:14.688 20:24:07 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:14.688 20:24:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:14.688 20:24:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:14.688 20:24:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:14.949 ************************************ 00:10:14.949 START TEST nvmf_abort 00:10:14.949 ************************************ 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:10:14.949 * Looking for test storage... 00:10:14.949 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:14.949 20:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- 
target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:10:14.950 20:24:07 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.090 20:24:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.090 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:23.091 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:23.091 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:23.091 Found net devices under 0000:31:00.0: cvl_0_0 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.091 20:24:15 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:23.091 Found net devices under 0000:31:00.1: cvl_0_1 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:23.091 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:23.091 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.646 ms 00:10:23.091 00:10:23.091 --- 10.0.0.2 ping statistics --- 00:10:23.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.091 rtt min/avg/max/mdev = 0.646/0.646/0.646/0.000 ms 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:23.091 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.091 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:10:23.091 00:10:23.091 --- 10.0.0.1 ping statistics --- 00:10:23.091 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.091 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=1191884 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 1191884 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 1191884 ']' 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:23.091 20:24:15 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:23.352 [2024-07-15 20:24:15.515410] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
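The nvmf_tcp_init sequence above is the entire network fixture for this test: one port of the e810 pair (cvl_0_0) is moved into a namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and a single ping in each direction proves reachability before the target starts. Stripped of the xtrace prefixes, the plumbing recorded above is:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port enters the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 in the host firewall
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1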
00:10:23.352 [2024-07-15 20:24:15.515480] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.352 EAL: No free 2048 kB hugepages reported on node 1 00:10:23.352 [2024-07-15 20:24:15.613285] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:23.352 [2024-07-15 20:24:15.708227] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.352 [2024-07-15 20:24:15.708299] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.352 [2024-07-15 20:24:15.708308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.352 [2024-07-15 20:24:15.708315] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.352 [2024-07-15 20:24:15.708321] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:23.352 [2024-07-15 20:24:15.708464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.352 [2024-07-15 20:24:15.708628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.352 [2024-07-15 20:24:15.708628] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.924 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:23.924 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:10:23.924 20:24:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:23.924 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:23.924 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 [2024-07-15 20:24:16.346039] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 Malloc0 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 Delay0 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
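The rpc_cmd calls above and just below build the abort test's whole target stack. rpc_cmd forwards to scripts/rpc.py against the app's RPC socket, so the equivalent direct invocations are roughly (paths as in this run; flag comments are the editor's reading of rpc.py, not log output):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192 -a 256    # transport options as recorded above
  $rpc bdev_malloc_create 64 4096 -b Malloc0             # 64 MiB RAM disk, 4096-byte blocks
  $rpc bdev_delay_create -b Malloc0 -d Delay0 \
       -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read and write latency (us) injected per I/O
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

The Delay0 wrapper, not Malloc0 itself, is what makes the abort statistics interesting: without the injected latency nearly every I/O would complete before its abort could arrive.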
00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 [2024-07-15 20:24:16.424727] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:24.185 20:24:16 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:10:24.185 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.185 [2024-07-15 20:24:16.492744] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:26.727 Initializing NVMe Controllers 00:10:26.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:26.727 controller IO queue size 128 less than required 00:10:26.727 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:10:26.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:10:26.727 Initialization complete. Launching workers. 
00:10:26.727 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 33380 00:10:26.727 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 33441, failed to submit 62 00:10:26.727 success 33384, unsuccess 57, failed 0 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:26.727 rmmod nvme_tcp 00:10:26.727 rmmod nvme_fabrics 00:10:26.727 rmmod nvme_keyring 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 1191884 ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 1191884 ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1191884' 00:10:26.727 killing process with pid 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 1191884 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:26.727 20:24:18 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.640 20:24:20 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:28.640 00:10:28.640 real 0m13.798s 00:10:28.640 user 0m13.296s 00:10:28.640 sys 0m6.973s 00:10:28.640 20:24:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:28.640 20:24:20 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:10:28.640 ************************************ 00:10:28.640 END TEST nvmf_abort 00:10:28.640 ************************************ 00:10:28.640 20:24:20 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:10:28.640 20:24:20 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:28.640 20:24:20 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:28.640 20:24:20 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:28.640 20:24:20 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:28.640 ************************************ 00:10:28.640 START TEST nvmf_ns_hotplug_stress 00:10:28.640 ************************************ 00:10:28.640 20:24:20 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:10:28.901 * Looking for test storage... 00:10:28.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:28.901 20:24:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:28.901 20:24:21 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:10:28.901 20:24:21 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:10:37.042 Found 0000:31:00.0 (0x8086 - 0x159b) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:10:37.042 Found 0000:31:00.1 (0x8086 - 0x159b) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:37.042 20:24:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.042 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:10:37.043 Found net devices under 0000:31:00.0: cvl_0_0 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:10:37.043 Found net devices under 0000:31:00.1: cvl_0_1 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:37.043 20:24:28 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:37.043 20:24:28 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:37.043 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:37.043 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.621 ms 00:10:37.043 00:10:37.043 --- 10.0.0.2 ping statistics --- 00:10:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.043 rtt min/avg/max/mdev = 0.621/0.621/0.621/0.000 ms 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:37.043 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:37.043 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:10:37.043 00:10:37.043 --- 10.0.0.1 ping statistics --- 00:10:37.043 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:37.043 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=1197254 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 1197254 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 1197254 ']' 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:37.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:37.043 20:24:29 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.043 [2024-07-15 20:24:29.316422] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:10:37.043 [2024-07-15 20:24:29.316472] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:37.043 EAL: No free 2048 kB hugepages reported on node 1 00:10:37.043 [2024-07-15 20:24:29.409716] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:37.303 [2024-07-15 20:24:29.484113] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:37.303 [2024-07-15 20:24:29.484167] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:37.303 [2024-07-15 20:24:29.484175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:37.303 [2024-07-15 20:24:29.484182] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:37.303 [2024-07-15 20:24:29.484188] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:37.303 [2024-07-15 20:24:29.484336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:37.303 [2024-07-15 20:24:29.484496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:37.303 [2024-07-15 20:24:29.484497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:37.878 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:10:37.879 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:38.138 [2024-07-15 20:24:30.270276] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:38.138 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:38.138 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:38.397 [2024-07-15 20:24:30.603744] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:38.397 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:38.657 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:10:38.657 Malloc0 00:10:38.657 20:24:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:38.916 Delay0 00:10:38.916 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:39.176 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:10:39.176 NULL1 00:10:39.176 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:10:39.436 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1197854 00:10:39.436 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:39.436 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:10:39.436 20:24:31 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:39.436 EAL: No free 2048 kB hugepages reported on node 1 00:10:40.820 Read completed with error (sct=0, sc=11) 00:10:40.820 20:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:10:40.820 20:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:10:40.820 20:24:32 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:10:40.820 true 00:10:40.820 20:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:40.820 20:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:41.754 20:24:33 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.014 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:10:42.014 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:10:42.014 true 00:10:42.014 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:42.014 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.274 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.274 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:10:42.274 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:10:42.534 true 00:10:42.534 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:42.534 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:42.794 20:24:34 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:42.794 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:10:42.794 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:10:43.053 true 00:10:43.054 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:43.054 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.313 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.313 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:10:43.313 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:10:43.573 true 00:10:43.573 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:43.573 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:43.833 20:24:35 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:43.833 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:10:43.833 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:10:44.093 true 00:10:44.093 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:44.093 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.093 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.352 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:10:44.352 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:10:44.612 true 00:10:44.612 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:44.612 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:44.612 20:24:36 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:44.906 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:10:44.906 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:10:45.192 true 00:10:45.192 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:45.192 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.192 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.462 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:10:45.462 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:10:45.462 true 00:10:45.462 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:45.462 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:45.722 20:24:37 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:45.982 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:10:45.982 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:10:45.982 true 00:10:45.982 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:45.982 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.243 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.503 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:10:46.503 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:10:46.503 true 00:10:46.503 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:46.503 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:46.765 20:24:38 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:46.765 20:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:10:46.765 20:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:10:47.025 true 00:10:47.025 20:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:47.025 20:24:39 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:47.965 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.224 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:10:48.224 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:10:48.224 true 00:10:48.224 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:48.224 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:48.483 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:48.483 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:10:48.483 20:24:40 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:10:48.743 true 00:10:48.743 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:48.743 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.004 20:24:41 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.004 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:10:49.004 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:10:49.265 true 00:10:49.265 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:49.265 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:49.525 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:49.525 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:10:49.525 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:10:49.784 true 00:10:49.784 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:49.784 20:24:41 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.044 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.044 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:10:50.044 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:10:50.304 true 00:10:50.304 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:50.304 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.304 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:50.564 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:10:50.564 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:10:50.825 true 00:10:50.825 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:50.825 20:24:42 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:50.825 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.091 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:10:51.091 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:10:51.091 true 00:10:51.353 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:51.353 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.353 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:51.612 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:10:51.612 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:10:51.612 true 00:10:51.612 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:51.612 20:24:43 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:51.872 20:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:52.132 20:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:10:52.132 20:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:10:52.132 true 00:10:52.132 20:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:52.132 20:24:44 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.530 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.530 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:10:53.530 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:10:53.530 true 00:10:53.530 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:53.530 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:53.789 20:24:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:53.789 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:10:53.789 20:24:46 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:10:54.049 true 00:10:54.049 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:54.049 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.309 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.309 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:10:54.309 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:10:54.568 true 00:10:54.568 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:54.568 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:54.827 20:24:46 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:54.827 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:10:54.827 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:10:55.085 true 00:10:55.085 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:55.085 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.085 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.344 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:10:55.344 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:10:55.604 true 00:10:55.604 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:55.604 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:55.604 20:24:47 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:55.864 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:10:55.864 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 
00:10:56.125 true 00:10:56.125 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:56.125 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.125 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.384 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:10:56.384 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:10:56.384 true 00:10:56.644 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:56.644 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:56.644 20:24:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:56.908 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:10:56.908 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:10:56.908 true 00:10:56.908 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:56.908 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.169 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.430 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:10:57.430 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:10:57.430 true 00:10:57.430 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:57.430 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:57.691 20:24:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:57.952 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:10:57.952 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:10:57.952 true 00:10:57.952 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:57.952 20:24:50 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.211 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.470 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:10:58.470 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:10:58.470 true 00:10:58.470 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:58.470 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:58.730 20:24:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:58.730 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:10:58.730 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:10:58.990 true 00:10:58.990 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:58.990 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:59.251 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:59.251 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:10:59.251 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:10:59.512 true 00:10:59.512 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:10:59.512 20:24:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.456 20:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.456 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:00.456 20:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:11:00.456 20:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:11:00.716 true 00:11:00.716 20:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:00.716 
20:24:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:00.716 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:00.977 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:11:00.977 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:11:01.236 true 00:11:01.236 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:01.237 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.237 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:01.496 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:11:01.496 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:11:01.756 true 00:11:01.756 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:01.756 20:24:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:01.756 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.016 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:11:02.016 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:11:02.016 true 00:11:02.275 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:02.275 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:02.275 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:02.535 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:11:02.535 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:11:02.535 true 00:11:02.535 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:02.535 20:24:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.474 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:11:03.474 20:24:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:03.734 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:11:03.734 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:11:03.994 true 00:11:03.994 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:03.994 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:03.994 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.253 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:11:04.253 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:11:04.514 true 00:11:04.514 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:04.514 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:04.514 20:24:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:04.773 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:11:04.773 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:11:05.033 true 00:11:05.033 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:05.033 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.033 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.293 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:11:05.293 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:11:05.553 true 00:11:05.553 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:05.553 20:24:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:05.553 20:24:57 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:05.814 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:11:05.814 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:11:05.814 true 00:11:06.075 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:06.075 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.075 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.335 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:11:06.335 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:11:06.335 true 00:11:06.335 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:06.335 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:06.596 20:24:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:06.857 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:11:06.857 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:11:06.857 true 00:11:06.857 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:06.857 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.117 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:11:07.378 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:11:07.378 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:11:07.378 true 00:11:07.378 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854 00:11:07.378 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:07.662 20:24:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0
00:11:07.922 20:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:11:07.922 20:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:11:07.923 true
00:11:07.923 20:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854
00:11:07.923 20:25:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:08.867 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:08.867 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049
00:11:08.867 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049
00:11:09.128 true
00:11:09.129 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854
00:11:09.129 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:09.433 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:11:09.433 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050
00:11:09.433 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050
00:11:09.721 Initializing NVMe Controllers
00:11:09.721 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:09.721 Controller IO queue size 128, less than required.
00:11:09.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:09.721 Controller IO queue size 128, less than required.
00:11:09.721 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:11:09.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:09.721 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:11:09.721 Initialization complete. Launching workers.
00:11:09.721 ========================================================
00:11:09.721                                                                               Latency(us)
00:11:09.721 Device Information                                                      :    IOPS   MiB/s  Average      min        max
00:11:09.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  355.64    0.17 92939.33  2290.04 1096563.02
00:11:09.721 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 7992.79    3.90 16014.86  1630.65  421653.22
00:11:09.721 ========================================================
00:11:09.721 Total                                                                   : 8348.44    4.08 19291.83  1630.65 1096563.02
00:11:09.721
00:11:09.721 true
00:11:09.721 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1197854
00:11:09.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1197854) - No such process
00:11:09.721 20:25:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1197854
00:11:09.721 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:11:09.721 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:11:09.980 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:11:09.981 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:11:09.981 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:11:09.981 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:09.981 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:11:10.240 null0
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:11:10.240 null1
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:10.240 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:11:10.501 null2
00:11:10.501 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:10.501 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:10.501 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:11:10.501 null3
00:11:10.761 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:11:10.761 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:11:10.761 20:25:02 nvmf_tcp.nvmf_ns_hotplug_stress --
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:11:10.761 null4 00:11:10.761 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:10.761 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:10.761 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:11:11.020 null5 00:11:11.020 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.020 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.020 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:11:11.020 null6 00:11:11.280 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.280 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:11:11.281 null7 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
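A quick arithmetic check of the Latency(us) summary a few entries above; no new data, just recomputing the Total row from the two NSID rows:

    IOPS:     355.64 + 7992.79 = 8348.43   ~ reported 8348.44 (per-row rounding)
    MiB/s:      0.17 +    3.90 =    4.07   ~ reported 4.08 (per-row rounding)
    Average:  (355.64 * 92939.33 + 7992.79 * 16014.86) / 8348.44 ~ 19291.7 us  ~ reported 19291.83
    min:      min(2290.04, 1630.65)        = 1630.65
    max:      max(1096563.02, 421653.22)   = 1096563.02

So the Total line is the IOPS-weighted combination of the two namespaces: NSID 1, the namespace being hot-plugged, averages roughly 5.8x the latency of NSID 2 and accounts for the ~1.1-second worst case.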
00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
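From sh@58 onward the trace is eight concurrent workers, so their xtrace lines interleave. A minimal bash sketch of this phase as reconstructed from the sh@14-sh@18 and sh@58-sh@66 markers; the function and variable names are read off the trace itself, and the rpc_py alias is the same illustrative assumption as in the earlier sketch:

    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py  # assumed alias
    add_remove() {                          # sh@14: trace shows "local nsid=7 bdev=null6" etc.
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do      # sh@16: ten add/remove rounds per worker
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # sh@17
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # sh@18
        done
    }
    nthreads=8                              # sh@58
    pids=()
    for ((i = 0; i < nthreads; i++)); do    # sh@59-sh@60: one 100 MB, 4096-byte-block null bdev per worker
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do    # sh@62-sh@64: background worker per namespace ID
        add_remove "$((i + 1))" "null$i" &  # sh@63: worker i hammers NSID i+1 with bdev null$i
        pids+=($!)                          # sh@64
    done
    wait "${pids[@]}"                       # sh@66: matches the traced "wait 1204274 1204276 ..."

Each worker owns a distinct namespace ID, which is why the interleaved add_ns/remove_ns entries for different NSIDs never conflict with one another.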
00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1204274 1204276 1204279 1204282 1204285 1204288 1204291 1204293 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.281 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.542 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:11.803 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.803 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.804 20:25:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:11.804 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.065 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:12.066 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:12.327 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.328 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.328 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:12.588 
20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.588 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:12.848 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:12.848 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.848 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.848 20:25:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:12.848 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.108 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.108 20:25:05 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.366 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.367 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:13.627 20:25:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:13.887 20:25:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:13.887 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.146 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.147 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.467 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( 
++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.468 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:11:14.727 20:25:06 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.727 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.727 rmmod nvme_tcp 00:11:14.987 rmmod nvme_fabrics 00:11:14.987 rmmod nvme_keyring 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 1197254 ']' 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 1197254 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 1197254 ']' 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 1197254 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1197254 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1197254' 00:11:14.987 killing 
process with pid 1197254
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 1197254
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 1197254
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:11:14.987 20:25:07 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:11:17.528 20:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:11:17.528
00:11:17.528 real 0m48.429s
00:11:17.528 user 3m10.156s
00:11:17.528 sys 0m15.784s
00:11:17.528 20:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:11:17.528 20:25:09 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:11:17.528 ************************************
00:11:17.528 END TEST nvmf_ns_hotplug_stress
00:11:17.528 ************************************
00:11:17.528 20:25:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:11:17.528 20:25:09 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:11:17.528 20:25:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:11:17.528 20:25:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:11:17.528 20:25:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:17.528 ************************************
00:11:17.528 START TEST nvmf_connect_stress
00:11:17.528 ************************************
00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp
00:11:17.528 * Looking for test storage...
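
The nvmf_ns_hotplug_stress loop that just ended (the repeated ns_hotplug_stress.sh@16-@18 xtrace entries above) is easier to follow as plain shell. A minimal sketch of the add/remove pattern the trace shows, assuming the target is up, the subsystem nqn.2016-06.io.spdk:cnode1 exists, and null bdevs null0..null7 have already been created; the iteration count and the randomized ordering here are illustrative, not the shipped script:

#!/usr/bin/env bash
# Sketch of the hot-plug pattern visible in the xtrace above; not the
# shipped target/ns_hotplug_stress.sh, whose randomization differs.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
nqn=nqn.2016-06.io.spdk:cnode1

i=0
while (( i < 10 )); do                                  # sh@16: (( i < 10 ))
    nsid=$(( (RANDOM % 8) + 1 ))
    # sh@17: attach null bdev "null$((nsid-1))" as namespace $nsid
    "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "null$((nsid - 1))" || true
    # sh@18: detach a (possibly different) random namespace
    "$rpc" nvmf_subsystem_remove_ns "$nqn" $(( (RANDOM % 8) + 1 )) || true
    (( ++i ))                                           # sh@16: (( ++i ))
done

Each pass races a namespace attach against a detach on the live subsystem, which is the hot-plug churn the test is stressing; the || true keeps the sketch going when an add or remove happens to hit a namespace that is already in (or already out of) the subsystem.
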
00:11:17.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:17.528 20:25:09 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:17.529 20:25:09 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:25.666 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:25.666 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:25.666 Found net devices under 0000:31:00.0: cvl_0_0 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:25.666 20:25:17 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:25.666 Found net devices under 0000:31:00.1: cvl_0_1 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:25.666 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:25.667 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:25.667 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.773 ms 00:11:25.667 00:11:25.667 --- 10.0.0.2 ping statistics --- 00:11:25.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.667 rtt min/avg/max/mdev = 0.773/0.773/0.773/0.000 ms 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:25.667 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:25.667 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.300 ms 00:11:25.667 00:11:25.667 --- 10.0.0.1 ping statistics --- 00:11:25.667 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:25.667 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=1209902 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 1209902 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 1209902 ']' 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.667 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:25.667 20:25:17 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:25.667 [2024-07-15 20:25:17.727204] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
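
Before that startup banner, the nvmf_tcp_init trace built the test topology: the target-side e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk network namespace and given 10.0.0.2, the peer port cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, and nvmf_tgt is then launched inside that namespace. Condensed into plain shell (the commands as they appear in the trace; run as root, and it assumes the two ports have already been renamed cvl_0_0/cvl_0_1):

#!/usr/bin/env bash
# Condensed from the nvmf/common.sh xtrace above; error handling omitted.
NS=cvl_0_0_ns_spdk
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port lives in the netns
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                           # root netns -> target, as above
ip netns exec "$NS" ping -c 1 10.0.0.1       # target netns -> initiator

modprobe nvme-tcp                            # kernel initiator side
ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &

Running the target inside its own namespace is what lets a single machine drive a real NIC-to-NIC TCP path between the two e810 ports; the rpc.py calls that follow in the trace then create the tcp transport and put a listener on 10.0.0.2:4420 inside that namespace.
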
00:11:25.667 [2024-07-15 20:25:17.727282] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:25.667 EAL: No free 2048 kB hugepages reported on node 1 00:11:25.667 [2024-07-15 20:25:17.827063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:25.667 [2024-07-15 20:25:17.920261] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:25.667 [2024-07-15 20:25:17.920327] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:25.667 [2024-07-15 20:25:17.920336] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:25.667 [2024-07-15 20:25:17.920343] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:25.667 [2024-07-15 20:25:17.920349] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:25.667 [2024-07-15 20:25:17.920497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:25.667 [2024-07-15 20:25:17.920790] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:25.667 [2024-07-15 20:25:17.920792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 [2024-07-15 20:25:18.562298] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 [2024-07-15 20:25:18.594352] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.239 NULL1 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1210000 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:26.239 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 EAL: No free 2048 kB hugepages reported on node 1 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.500 20:25:18 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:26.760 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:26.760 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:26.760 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:26.760 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:26.760 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.021 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.021 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:27.021 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.021 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.021 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.591 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.591 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # 
kill -0 1210000 00:11:27.591 20:25:19 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.591 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.591 20:25:19 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:27.851 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:27.851 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:27.851 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:27.851 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:27.851 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.111 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.111 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:28.111 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.111 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.111 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.372 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.372 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:28.372 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.372 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.372 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.632 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:28.632 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:28.632 20:25:20 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.632 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:28.632 20:25:20 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.202 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.202 20:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:29.202 20:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.202 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.202 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.462 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.462 20:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:29.462 20:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.462 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.462 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.723 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.723 20:25:21 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:29.723 20:25:21 
nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.723 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.723 20:25:21 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.983 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:29.983 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:29.983 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.983 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:29.983 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.244 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.244 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:30.244 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.244 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.244 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.813 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:30.813 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:30.813 20:25:22 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.813 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:30.813 20:25:22 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.072 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.072 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:31.072 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.072 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.072 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.331 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.331 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:31.331 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.331 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.331 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.591 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:31.591 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:31.591 20:25:23 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.591 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:31.591 20:25:23 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.159 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.159 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:32.159 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:11:32.159 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.159 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.419 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.419 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:32.419 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.419 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.419 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.684 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.684 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:32.684 20:25:24 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.684 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.684 20:25:24 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.946 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:32.946 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:32.946 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.946 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:32.946 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.204 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.205 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:33.205 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.205 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.205 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.773 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:33.773 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:33.773 20:25:25 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.773 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:33.773 20:25:25 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.033 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.033 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:34.033 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.033 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.033 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.293 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.293 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:34.293 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.293 20:25:26 
nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.293 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.553 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.553 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:34.553 20:25:26 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.553 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.553 20:25:26 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.814 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:34.814 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:34.814 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.814 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:34.814 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.410 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.410 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:35.410 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.410 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.410 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.670 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.670 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:35.670 20:25:27 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.670 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.670 20:25:27 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.932 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:35.932 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:35.932 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.932 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:35.932 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.192 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.193 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:36.193 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.193 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.193 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.453 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1210000 00:11:36.454 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1210000) - No such process 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1210000 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:36.454 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:36.454 rmmod nvme_tcp 00:11:36.454 rmmod nvme_fabrics 00:11:36.714 rmmod nvme_keyring 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 1209902 ']' 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 1209902 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 1209902 ']' 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 1209902 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1209902 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1209902' 00:11:36.714 killing process with pid 1209902 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 1209902 00:11:36.714 20:25:28 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 1209902 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:36.714 20:25:29 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.259 20:25:31 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:39.259 00:11:39.259 real 0m21.638s 00:11:39.259 user 0m42.164s 00:11:39.259 sys 0m9.192s 00:11:39.259 20:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:39.259 20:25:31 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:39.259 ************************************ 00:11:39.259 END TEST nvmf_connect_stress 00:11:39.259 ************************************ 00:11:39.259 20:25:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:39.259 20:25:31 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.259 20:25:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:39.259 20:25:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:39.259 20:25:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:39.259 ************************************ 00:11:39.259 START TEST nvmf_fused_ordering 00:11:39.259 ************************************ 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:39.259 * Looking for test storage... 00:11:39.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 
-- # NET_TYPE=phy 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:39.259 20:25:31 
nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:39.259 20:25:31 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:11:47.468 Found 0000:31:00.0 (0x8086 - 0x159b) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:11:47.468 Found 0000:31:00.1 (0x8086 - 0x159b) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:11:47.468 Found net devices under 0000:31:00.0: cvl_0_0 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:11:47.468 Found net devices under 0000:31:00.1: cvl_0_1 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush 
cvl_0_0
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:11:47.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:11:47.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.644 ms
00:11:47.468
00:11:47.468 --- 10.0.0.2 ping statistics ---
00:11:47.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:47.468 rtt min/avg/max/mdev = 0.644/0.644/0.644/0.000 ms
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:11:47.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:11:47.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms
00:11:47.468
00:11:47.468 --- 10.0.0.1 ping statistics ---
00:11:47.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:11:47.468 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=1216732
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 1216732
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 1216732 ']'
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:11:47.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:11:47.468 20:25:39 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:11:47.468 [2024-07-15 20:25:39.602191] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... [2024-07-15 20:25:39.602289] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:11:47.468 EAL: No free 2048 kB hugepages reported on node 1
00:11:47.468 [2024-07-15 20:25:39.704466] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:11:47.468 [2024-07-15 20:25:39.798118] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:11:47.468 [2024-07-15 20:25:39.798181] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:11:47.468 [2024-07-15 20:25:39.798190] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:11:47.468 [2024-07-15 20:25:39.798197] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:11:47.468 [2024-07-15 20:25:39.798203] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
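The nvmf/common.sh trace above captures the pattern this harness uses to run initiator and target on a single host: one e810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk network namespace, both ends of the link are addressed, the path is verified with ping in each direction, and nvmf_tgt is then launched inside the namespace. A minimal standalone sketch of that bring-up, assuming the same interface names, addresses, and binary path as this run (error handling and the _remove_spdk_ns cleanup are omitted):

# create the target-side namespace and move one port of the back-to-back pair into it
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
# address the initiator end (cvl_0_1) and the target end (cvl_0_0), then bring the links up
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port and sanity-check the path in both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
# start the target inside the namespace on core 1 (-m 0x2), as nvmf/common.sh@480 does
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2

Isolating the target in its own namespace forces the initiator's connections to 10.0.0.2 onto the physical link between the two ports rather than the kernel's local shortcut, which is what the NET_TYPE=phy configuration is meant to exercise.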
00:11:47.468 [2024-07-15 20:25:39.798227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.040 20:25:40 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 [2024-07-15 20:25:40.425424] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 [2024-07-15 20:25:40.449622] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 NULL1 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.302 20:25:40 
nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.302 20:25:40 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:48.302 [2024-07-15 20:25:40.518535] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:11:48.302 [2024-07-15 20:25:40.518579] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1217073 ] 00:11:48.302 EAL: No free 2048 kB hugepages reported on node 1 00:11:48.871 Attached to nqn.2016-06.io.spdk:cnode1 00:11:48.871 Namespace ID: 1 size: 1GB 00:11:48.871 fused_ordering(0) 00:11:48.871 fused_ordering(1) 00:11:48.871 fused_ordering(2) 00:11:48.871 fused_ordering(3) 00:11:48.871 fused_ordering(4) 00:11:48.871 fused_ordering(5) 00:11:48.871 fused_ordering(6) 00:11:48.871 fused_ordering(7) 00:11:48.871 fused_ordering(8) 00:11:48.871 fused_ordering(9) 00:11:48.871 fused_ordering(10) 00:11:48.871 fused_ordering(11) 00:11:48.871 fused_ordering(12) 00:11:48.871 fused_ordering(13) 00:11:48.871 fused_ordering(14) 00:11:48.871 fused_ordering(15) 00:11:48.871 fused_ordering(16) 00:11:48.871 fused_ordering(17) 00:11:48.871 fused_ordering(18) 00:11:48.871 fused_ordering(19) 00:11:48.871 fused_ordering(20) 00:11:48.871 fused_ordering(21) 00:11:48.871 fused_ordering(22) 00:11:48.871 fused_ordering(23) 00:11:48.871 fused_ordering(24) 00:11:48.871 fused_ordering(25) 00:11:48.871 fused_ordering(26) 00:11:48.871 fused_ordering(27) 00:11:48.871 fused_ordering(28) 00:11:48.871 fused_ordering(29) 00:11:48.871 fused_ordering(30) 00:11:48.871 fused_ordering(31) 00:11:48.871 fused_ordering(32) 00:11:48.871 fused_ordering(33) 00:11:48.871 fused_ordering(34) 00:11:48.871 fused_ordering(35) 00:11:48.871 fused_ordering(36) 00:11:48.871 fused_ordering(37) 00:11:48.871 fused_ordering(38) 00:11:48.871 fused_ordering(39) 00:11:48.871 fused_ordering(40) 00:11:48.871 fused_ordering(41) 00:11:48.871 fused_ordering(42) 00:11:48.871 fused_ordering(43) 00:11:48.871 fused_ordering(44) 00:11:48.871 fused_ordering(45) 00:11:48.871 fused_ordering(46) 00:11:48.871 fused_ordering(47) 00:11:48.871 fused_ordering(48) 00:11:48.871 fused_ordering(49) 00:11:48.871 fused_ordering(50) 00:11:48.871 fused_ordering(51) 00:11:48.871 fused_ordering(52) 00:11:48.871 fused_ordering(53) 00:11:48.871 fused_ordering(54) 00:11:48.871 fused_ordering(55) 00:11:48.871 fused_ordering(56) 00:11:48.871 fused_ordering(57) 00:11:48.871 fused_ordering(58) 00:11:48.871 fused_ordering(59) 00:11:48.871 fused_ordering(60) 00:11:48.871 fused_ordering(61) 00:11:48.871 fused_ordering(62) 00:11:48.871 fused_ordering(63) 00:11:48.871 fused_ordering(64) 00:11:48.871 fused_ordering(65) 00:11:48.871 fused_ordering(66) 00:11:48.871 fused_ordering(67) 00:11:48.871 fused_ordering(68) 00:11:48.871 fused_ordering(69) 00:11:48.871 fused_ordering(70) 00:11:48.871 fused_ordering(71) 00:11:48.871 fused_ordering(72) 00:11:48.871 fused_ordering(73) 00:11:48.871 fused_ordering(74) 00:11:48.871 fused_ordering(75) 00:11:48.871 fused_ordering(76) 00:11:48.871 fused_ordering(77) 00:11:48.871 fused_ordering(78) 00:11:48.871 
00:11:48.871 fused_ordering(79) ... 00:11:50.571 fused_ordering(1023) [per-command counter output collapsed: fused_ordering commands 79 through 1023 all completed between 00:11:48.871 and 00:11:50.571]
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20}
00:11:50.571 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r
nvme-tcp 00:11:50.571 rmmod nvme_tcp 00:11:50.571 rmmod nvme_fabrics 00:11:50.830 rmmod nvme_keyring 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 1216732 ']' 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 1216732 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 1216732 ']' 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 1216732 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:50.830 20:25:42 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1216732 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1216732' 00:11:50.830 killing process with pid 1216732 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 1216732 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 1216732 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:11:50.830 20:25:43 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:53.369 20:25:45 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:53.369 00:11:53.369 real 0m14.047s 00:11:53.369 user 0m7.249s 00:11:53.369 sys 0m7.571s 00:11:53.369 20:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:53.369 20:25:45 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:53.369 ************************************ 00:11:53.369 END TEST nvmf_fused_ordering 00:11:53.369 ************************************ 00:11:53.369 20:25:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:53.369 20:25:45 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:11:53.369 20:25:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:53.369 20:25:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
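The nvmftestfini teardown traced above reduces to a handful of shell steps: sync, unload the initiator-side kernel modules, kill the target process, and drop the test namespace. A minimal standalone sketch of the same sequence follows; the PID and namespace name are the ones from this run and would differ elsewhere, and the final netns deletion is only the assumed equivalent of the harness's _remove_spdk_ns helper:

    sync
    for i in {1..20}; do                    # module unload can race in-flight disconnects
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    kill 1216732 2>/dev/null                # nvmf_tgt PID from this log
    while kill -0 1216732 2>/dev/null; do   # 'wait' only works on our own children
        sleep 0.1
    done
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1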
00:11:53.369 20:25:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:11:53.369 ************************************
00:11:53.369 START TEST nvmf_delete_subsystem
00:11:53.369 ************************************
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:11:53.370 * Looking for test storage...
00:11:53.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[xtrace of nvmf/common.sh defaults condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_IP_LEAST_ADDR=8, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn]
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
[paths/export.sh@2-@6 condensed: PATH is rebuilt by prepending /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin and /opt/go/1.21.1/bin ahead of the stock /usr/local/bin:...:/var/lib/snapd/snap/bin directories; repeated sourcing has already stacked the same three toolchain entries onto the front of PATH several times over]
[nvmf/common.sh@47-@51 condensed: NVMF_APP_SHM_ID exported, NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF), have_pci_nics=0]
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']'
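The duplicate toolchain entries are harmless (lookup stops at the first hit) but they make every subsequent PATH dump harder to read. A one-liner that would de-duplicate PATH while keeping first-seen order, offered purely as an illustration and not something the harness runs:

    PATH=$(printf '%s' "$PATH" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//')
    export PATH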
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:11:53.370 20:25:45 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:12:01.514 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
[nvmf/common.sh@291-@335 condensed: the e810/x722/mlx PCI-ID tables are built (e810: 0x1592, 0x159b; x722: 0x37d2; mlx: 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015, 0x1013) and, since this is an e810 run over tcp, pci_devs is narrowed to the e810 list, which matches two functions]
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)'
00:12:01.515 Found 0000:31:00.0 (0x8086 - 0x159b)
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)'
00:12:01.515 Found 0000:31:00.1 (0x8086 - 0x159b)
[nvmf/common.sh@342-@394 condensed: both functions are bound to the ice driver, neither is unknown or unbound, and each exposes exactly one net device under /sys/bus/pci/devices/<bdf>/net]
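The scan above matched both ports of an Intel E810 (device ID 0x159b) and then resolved each PCI function to its kernel net device through sysfs. Outside the harness the same lookup can be done with stock pciutils; a small sketch, assuming lspci is installed:

    # list Intel E810 functions by vendor:device ID, full-BDF form
    for pci in $(lspci -D -d 8086:159b | awk '{print $1}'); do
        echo "Found $pci (0x8086 - 0x159b)"
        ls "/sys/bus/pci/devices/$pci/net"   # kernel net device name(s), e.g. cvl_0_0
    done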
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0'
00:12:01.515 Found net devices under 0000:31:00.0: cvl_0_0
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1'
00:12:01.515 Found net devices under 0000:31:00.1: cvl_0_1
[per-device sysfs walk and net_devs array bookkeeping condensed]
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:01.515 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:01.515 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.718 ms
00:12:01.515 --- 10.0.0.2 ping statistics ---
00:12:01.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:01.515 rtt min/avg/max/mdev = 0.718/0.718/0.718/0.000 ms
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:01.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:01.515 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms
00:12:01.515 --- 10.0.0.1 ping statistics ---
00:12:01.515 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:01.515 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
[transport-option bookkeeping condensed: NVMF_APP is prefixed with the netns exec command and NVMF_TRANSPORT_OPTS settles on '-t tcp -o']
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=1222094
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 1222094
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
[waitforlisten bookkeeping condensed: rpc_addr=/var/tmp/spdk.sock]
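The moves above give a self-contained two-sided topology: physical port cvl_0_0 goes into namespace cvl_0_0_ns_spdk as the target side (10.0.0.2), its sibling port cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), an iptables rule admits the NVMe/TCP port, and a ping in each direction proves the path. Condensed from the trace into a standalone sketch; the interface names are specific to this machine's E810:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # target-side port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator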
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:01.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:01.515 20:25:53 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:01.515 [2024-07-15 20:25:53.636012] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:12:01.515 [2024-07-15 20:25:53.636078] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:01.515 EAL: No free 2048 kB hugepages reported on node 1
00:12:01.515 [2024-07-15 20:25:53.718733] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
00:12:01.515 [2024-07-15 20:25:53.793829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:01.515 [2024-07-15 20:25:53.793870] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:01.515 [2024-07-15 20:25:53.793877] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:01.515 [2024-07-15 20:25:53.793883] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:01.515 [2024-07-15 20:25:53.793889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
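waitforlisten is the harness's readiness gate: it polls until the freshly launched nvmf_tgt both stays alive and answers on its RPC socket. A rough standalone equivalent, assuming SPDK's scripts/rpc.py is reachable as rpc.py (the rpc_get_methods RPC is a cheap no-op probe); the PID is this run's and hypothetical elsewhere:

    pid=1222094                              # nvmf_tgt PID from the log
    for _ in $(seq 1 100); do                # mirrors max_retries=100
        kill -0 "$pid" 2>/dev/null || { echo "target died" >&2; exit 1; }
        if rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; then
            break                            # socket is up and answering
        fi
        sleep 0.1
    done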
00:12:01.515 [2024-07-15 20:25:53.794031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:12:01.515 [2024-07-15 20:25:53.794033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:02.088 [2024-07-15 20:25:54.433492] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:02.088 [2024-07-15 20:25:54.449627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:12:02.088 NULL1
00:12:02.088 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:02.349 Delay0
[rpc_cmd xtrace bookkeeping condensed: each call is bracketed by xtrace_disable/set +x and a '[[ 0 == 0 ]]' status check]
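Strung together, the rpc_cmd calls above (plus the nvmf_subsystem_add_ns that follows) are the whole target-side fixture for this test: a TCP transport with the recorded options, subsystem cnode1 capped at 10 namespaces, a listener on 10.0.0.2:4420, and a 1000 MiB null bdev wrapped in a delay bdev whose four latency knobs are all 1000000 (if those are microseconds, as in current SPDK, that is about a second per I/O, which is what keeps commands in flight for the deletion step). rpc_cmd is the harness wrapper around scripts/rpc.py; issued directly it would look roughly like this, with the rpc.py name and socket path assumed:

    RPC="rpc.py -s /var/tmp/spdk.sock"       # i.e. SPDK's scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC bdev_null_create NULL1 1000 512     # 1000 MiB backing bdev, 512-byte blocks
    $RPC bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0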
00:12:02.349 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:02.349 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1222445
00:12:02.349 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:12:02.349 20:25:54 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:02.349 EAL: No free 2048 kB hugepages reported on node 1
00:12:02.349 [2024-07-15 20:25:54.534280] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:12:04.266 20:25:56 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:12:04.266 Read completed with error (sct=0, sc=8)
00:12:04.266 Write completed with error (sct=0, sc=8)
00:12:04.266 starting I/O failed: -6
[several hundred further 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' lines collapsed: every command spdk_nvme_perf still had queued against cnode1 is aborted as the subsystem goes away, and new submissions are refused]
00:12:04.267 [2024-07-15 20:25:56.618743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242be90 is same with the state(5) to be set
00:12:04.267 [2024-07-15 20:25:56.622118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7effb4000c00 is same with the state(5) to be set
00:12:04.267 Read completed
with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Write completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Write completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Write completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Write completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Write completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:04.267 Read completed with error (sct=0, sc=8) 00:12:05.652 [2024-07-15 20:25:57.592345] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x240a500 is same with the state(5) to be set 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Write completed with error (sct=0, sc=8) 
00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.652 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 [2024-07-15 20:25:57.622244] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242ad00 is same with the state(5) to be set 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 [2024-07-15 20:25:57.622915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x242bcb0 is same with the state(5) to be set 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 [2024-07-15 20:25:57.623982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7effb400cfe0 is same with the state(5) to be set 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 
00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Read completed with error (sct=0, sc=8) 00:12:05.653 Write completed with error (sct=0, sc=8) 00:12:05.653 [2024-07-15 20:25:57.624093] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7effb400d740 is same with the state(5) to be set 00:12:05.653 Initializing NVMe Controllers 00:12:05.653 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:05.653 Controller IO queue size 128, less than required. 00:12:05.653 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:12:05.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:12:05.653 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:12:05.653 Initialization complete. Launching workers. 
00:12:05.653 ========================================================
00:12:05.653 Latency(us)
00:12:05.653 Device Information : IOPS MiB/s Average min max
00:12:05.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 170.30 0.08 892157.81 218.07 1007161.25
00:12:05.653 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 160.34 0.08 945156.54 303.98 2001075.20
00:12:05.653 ========================================================
00:12:05.653 Total : 330.63 0.16 917859.00 218.07 2001075.20
00:12:05.653
00:12:05.653 [2024-07-15 20:25:57.624699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240a500 (9): Bad file descriptor
00:12:05.653 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:12:05.653 20:25:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:05.653 20:25:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:12:05.653 20:25:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1222445
00:12:05.653 20:25:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1222445
00:12:05.915 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1222445) - No such process
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1222445
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 1222445
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 1222445
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
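After the deletion, the harness does not rely on wait alone; it polls with kill -0 until the pid vanishes, then asserts that waiting on the reaped pid fails. A hedged reconstruction of that loop from the xtrace above (delete_subsystem.sh lines 34-38 and 45; the 2>/dev/null redirect is an addition to keep the probe quiet):

    delay=0
    # kill -0 only tests for process existence; it sends no signal.
    while kill -0 "$perf_pid" 2>/dev/null; do
        (( delay++ > 30 )) && exit 1   # give up after ~15s of 0.5s naps
        sleep 0.5
    done
    NOT wait "$perf_pid"   # NOT() is the autotest helper that inverts exit status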
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:05.915 [2024-07-15 20:25:58.155152] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1223121
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:05.915 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:05.915 EAL: No free 2048 kB hugepages reported on node 1
00:12:05.915 [2024-07-15 20:25:58.225462] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:12:06.489 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:06.489 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:06.489 20:25:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:07.061 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:07.061 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:07.061 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:07.321 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:07.321 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:07.321 20:25:59 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:07.892 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:07.892 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:07.892 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:08.463 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:08.463 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:08.463 20:26:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:09.033 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:09.033 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:09.033 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:12:09.033 Initializing NVMe Controllers
00:12:09.033 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:12:09.033 Controller IO queue size 128, less than required.
00:12:09.033 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:12:09.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:12:09.033 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:12:09.033 Initialization complete. Launching workers.
00:12:09.033 ========================================================
00:12:09.033 Latency(us)
00:12:09.033 Device Information : IOPS MiB/s Average min max
00:12:09.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002159.04 1000181.33 1008321.41
00:12:09.033 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1002991.34 1000286.94 1008986.38
00:12:09.033 ========================================================
00:12:09.033 Total : 256.00 0.12 1002575.19 1000181.33 1008986.38
00:12:09.033
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1223121
00:12:09.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1223121) - No such process
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1223121
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20}
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:12:09.603 rmmod nvme_tcp
00:12:09.603 rmmod nvme_fabrics
00:12:09.603 rmmod nvme_keyring
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 1222094 ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 1222094
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 1222094 ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 1222094
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1222094
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1222094'
00:12:09.603 killing process with pid 1222094
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 1222094
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 1222094
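Teardown follows the shape every nvmf test here uses: unload the host-side NVMe transport modules, then kill the nvmf_tgt reactor started for the test. A rough sketch of the nvmftestfini path traced above (retry count from the log; the pid becomes a variable, and error handling is trimmed):

    # Sketch of the logged nvmftestfini sequence, not the verbatim helper.
    sync
    set +e
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break   # drags out nvme_tcp/nvme_fabrics/nvme_keyring
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"     # stop the nvmf_tgt reactor (pid 1222094 in this run)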
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:12:09.603 20:26:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:12:12.148 20:26:04 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:12:12.148
00:12:12.148 real 0m18.727s
00:12:12.148 user 0m30.576s
00:12:12.148 sys 0m6.897s
00:12:12.148 20:26:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable
00:12:12.148 20:26:04 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:12:12.148 ************************************
00:12:12.148 END TEST nvmf_delete_subsystem
00:12:12.148 ************************************
00:12:12.148 20:26:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:12:12.148 20:26:04 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp
00:12:12.148 20:26:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:12:12.148 20:26:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:12:12.148 20:26:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:12:12.148 ************************************
00:12:12.148 START TEST nvmf_ns_masking
00:12:12.148 ************************************
00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp
00:12:12.148 * Looking for test storage...
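The START TEST/END TEST banners and the real/user/sys triple come from the run_test wrapper that launches each test script. A plausible minimal form of that wrapper, reconstructed from its visible behavior only (the real helper in autotest_common.sh does more bookkeeping):

    # Hypothetical reduction of run_test: banner, time the test, banner again.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                 # emits the real/user/sys lines seen above
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }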
00:12:12.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.148 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=e63b6330-da7f-4c64-83c8-a94998b0f3c3 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=a2a01e11-9c5f-4572-8914-0d54202f0bb3 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- # 
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f9b43e04-3768-43b9-bbd5-4f5f7456c189 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.149 20:26:04 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:20.303 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:20.303 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.303 
20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:20.303 Found net devices under 0000:31:00.0: cvl_0_0 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:20.303 Found net devices under 0000:31:00.1: cvl_0_1 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:12:20.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:20.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.635 ms
00:12:20.303
00:12:20.303 --- 10.0.0.2 ping statistics ---
00:12:20.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:20.303 rtt min/avg/max/mdev = 0.635/0.635/0.635/0.000 ms
00:12:20.303 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:12:20.304 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:20.304 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.448 ms
00:12:20.304
00:12:20.304 --- 10.0.0.1 ping statistics ---
00:12:20.304 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:20.304 rtt min/avg/max/mdev = 0.448/0.448/0.448/0.000 ms
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=1228483
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 1228483
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1228483 ']'
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100
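Target and initiator share one physical host in this job, so nvmf_tcp_init splits the two E810 ports across network namespaces: cvl_0_0 moves into cvl_0_0_ns_spdk and serves as the target at 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1. Condensed from the ip/iptables commands replayed above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2    # initiator -> target sanity check (0.635 ms above)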
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable
00:12:20.304 20:26:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:20.304 [2024-07-15 20:26:12.569571] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:12:20.304 [2024-07-15 20:26:12.569635] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:20.304 EAL: No free 2048 kB hugepages reported on node 1
00:12:20.304 [2024-07-15 20:26:12.651796] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:20.566 [2024-07-15 20:26:12.724013] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:20.566 [2024-07-15 20:26:12.724051] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:20.566 [2024-07-15 20:26:12.724059] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:20.566 [2024-07-15 20:26:12.724065] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:20.566 [2024-07-15 20:26:12.724070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:20.566 [2024-07-15 20:26:12.724092] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:21.136 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:12:21.397 [2024-07-15 20:26:13.531501] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:21.397 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:12:21.397 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:12:21.397 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:12:21.397 Malloc1
00:12:21.397 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:12:21.657 Malloc2
00:12:21.657 20:26:13 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
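With the target reactor up inside the namespace, the ns_masking fixture is built entirely over JSON-RPC against the default /var/tmp/spdk.sock. The same four calls as the trace, gathered in one place (workspace-absolute rpc.py path shortened):

    rpc=scripts/rpc.py

    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1    # 64 MB backing bdev, 512-byte blocks
    $rpc bdev_malloc_create 64 512 -b Malloc2
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME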
00:12:21.916 20:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:12:21.916 20:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:22.178 [2024-07-15 20:26:14.376105] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b43e04-3768-43b9-bbd5-4f5f7456c189 -a 10.0.0.2 -s 4420 -i 4
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]]
00:12:22.178 20:26:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 ))
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter ))
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:24.159 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
[ 0]:0x1
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2acafd5a9e6341149e9412693c45d4cc
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2acafd5a9e6341149e9412693c45d4cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
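Everything that follows hinges on the ns_is_visible helper: a namespace counts as visible when it shows up in nvme list-ns and reports a non-zero NGUID through nvme id-ns. A hedged sketch of the check as the trace uses it (ns_masking.sh lines 43-45; the function body here is reconstructed from the xtrace, not copied from the script):

    ns_is_visible() {
        # $1 is the nsid as printed by list-ns, e.g. 0x1
        nvme list-ns /dev/nvme0 | grep "$1" || return 1
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$1" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        -I f9b43e04-3768-43b9-bbd5-4f5f7456c189 -a 10.0.0.2 -s 4420 -i 4
    ns_is_visible 0x1    # passes while Malloc1 is attached auto-visible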
00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:24.422 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.681 [ 0]:0x1 00:12:24.681 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:24.681 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.681 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2acafd5a9e6341149e9412693c45d4cc 00:12:24.681 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2acafd5a9e6341149e9412693c45d4cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.681 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:24.682 [ 1]:0x2 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:24.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:24.682 20:26:16 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:24.941 20:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b43e04-3768-43b9-bbd5-4f5f7456c189 -a 10.0.0.2 -s 4420 -i 4 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:12:25.200 20:26:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:27.108 20:26:19 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:27.108 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:27.369 [ 0]:0x2 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.369 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.629 [ 0]:0x1 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2acafd5a9e6341149e9412693c45d4cc 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2acafd5a9e6341149e9412693c45d4cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:27.629 [ 1]:0x2 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.629 20:26:19 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:27.890 [ 0]:0x2 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:27.890 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:28.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:28.152 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:28.152 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:28.152 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f9b43e04-3768-43b9-bbd5-4f5f7456c189 -a 10.0.0.2 -s 4420 -i 4 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:28.413 20:26:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
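Everything in the passage above toggles visibility with just two RPCs; the rest is re-reading NGUIDs. A sketch of the grant/revoke pair, arguments as used in the trace (subsystem NQN, NSID, host NQN):

    # grant: make NSID 1 visible to host1 on a namespace created with --no-auto-visible
    rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # revoke: afterwards the host reads back an all-zero NGUID for NSID 1
    rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The NOT wrapper visible in the trace inverts the wrapped command's status: es=1 from a failed ns_is_visible makes (( !es == 0 )) evaluate true, so the test passes precisely when the namespace is hidden.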
00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.327 [ 0]:0x1 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=2acafd5a9e6341149e9412693c45d4cc 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 2acafd5a9e6341149e9412693c45d4cc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.327 [ 1]:0x2 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.327 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:30.588 [ 0]:0x2 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:30.588 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:30.849 20:26:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:30.849 [2024-07-15 20:26:23.137340] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:30.849 request: 00:12:30.849 { 00:12:30.849 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:30.849 "nsid": 2, 00:12:30.849 "host": "nqn.2016-06.io.spdk:host1", 00:12:30.849 "method": "nvmf_ns_remove_host", 00:12:30.849 "req_id": 1 00:12:30.849 } 00:12:30.849 Got JSON-RPC error response 00:12:30.849 response: 00:12:30.849 { 00:12:30.849 "code": -32602, 00:12:30.849 "message": "Invalid parameters" 00:12:30.849 } 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:30.849 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:31.109 [ 0]:0x2 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=c2f73adf8cbf407283497778a33fb197 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
c2f73adf8cbf407283497778a33fb197 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:31.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1230871 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1230871 /var/tmp/host.sock 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 1230871 ']' 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:31.109 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:31.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:31.110 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:31.110 20:26:23 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:31.370 [2024-07-15 20:26:23.526519] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
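A second SPDK application is being started here purely as an NVMe-oF host, on its own RPC socket so its RPCs do not collide with the target's /var/tmp/spdk.sock. A condensed sketch of the pattern driven below through the hostrpc helper (paths shortened; -m 2 pins it to core 1):

    spdk_tgt -r /var/tmp/host.sock -m 2 &
    rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0

The uuid2nguid step that follows strips the dashes from a UUID (and, judging by the -g values in the trace, upper-cases it), so e63b6330-da7f-4c64-83c8-a94998b0f3c3 becomes E63B6330DA7F4C6483C8A94998B0F3C3 and nvmf_subsystem_add_ns -g can pin the namespace to a known NGUID that bdev_get_bdevs later reports back as the bdev uuid.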
00:12:31.370 [2024-07-15 20:26:23.526572] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1230871 ] 00:12:31.370 EAL: No free 2048 kB hugepages reported on node 1 00:12:31.370 [2024-07-15 20:26:23.610456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.370 [2024-07-15 20:26:23.675685] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:31.941 20:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:31.941 20:26:24 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:12:31.941 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:32.203 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:32.203 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid e63b6330-da7f-4c64-83c8-a94998b0f3c3 00:12:32.203 20:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:32.203 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g E63B6330DA7F4C6483C8A94998B0F3C3 -i 00:12:32.465 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid a2a01e11-9c5f-4572-8914-0d54202f0bb3 00:12:32.465 20:26:24 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:32.465 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g A2A01E119C5F457289140D54202F0BB3 -i 00:12:32.727 20:26:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:32.727 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:32.988 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:32.988 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:33.249 nvme0n1 00:12:33.250 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:33.250 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:12:33.821 nvme1n2 00:12:33.821 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:33.821 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:33.821 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:33.821 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:33.821 20:26:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:33.821 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:33.821 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:33.821 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:33.821 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ e63b6330-da7f-4c64-83c8-a94998b0f3c3 == \e\6\3\b\6\3\3\0\-\d\a\7\f\-\4\c\6\4\-\8\3\c\8\-\a\9\4\9\9\8\b\0\f\3\c\3 ]] 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ a2a01e11-9c5f-4572-8914-0d54202f0bb3 == \a\2\a\0\1\e\1\1\-\9\c\5\f\-\4\5\7\2\-\8\9\1\4\-\0\d\5\4\2\0\2\f\0\b\b\3 ]] 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 1230871 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1230871 ']' 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1230871 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1230871 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1230871' 00:12:34.083 killing process with pid 1230871 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1230871 00:12:34.083 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1230871 00:12:34.345 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:34.606 20:26:26 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:34.606 rmmod nvme_tcp 00:12:34.606 rmmod nvme_fabrics 00:12:34.606 rmmod nvme_keyring 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 1228483 ']' 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 1228483 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 1228483 ']' 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 1228483 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1228483 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1228483' 00:12:34.606 killing process with pid 1228483 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 1228483 00:12:34.606 20:26:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 1228483 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:34.868 20:26:27 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.416 20:26:29 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:37.416 00:12:37.416 real 0m25.066s 00:12:37.416 user 0m24.269s 00:12:37.416 sys 0m8.053s 00:12:37.416 20:26:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:37.416 20:26:29 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:37.416 ************************************ 00:12:37.416 END TEST nvmf_ns_masking 00:12:37.416 ************************************ 00:12:37.416 20:26:29 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:12:37.416 20:26:29 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:12:37.416 20:26:29 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:37.416 20:26:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:37.416 20:26:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:37.416 20:26:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:37.416 ************************************ 00:12:37.416 START TEST nvmf_nvme_cli 00:12:37.416 ************************************ 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:37.416 * Looking for test storage... 00:12:37.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:37.416 20:26:29 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:37.417 20:26:29 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:45.562 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:12:45.563 Found 0000:31:00.0 (0x8086 - 0x159b) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:12:45.563 Found 0000:31:00.1 (0x8086 - 0x159b) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:12:45.563 Found net devices under 0000:31:00.0: cvl_0_0 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:12:45.563 Found net devices under 0000:31:00.1: cvl_0_1 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:45.563 20:26:37 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:45.563 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:45.563 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.642 ms 00:12:45.563 00:12:45.563 --- 10.0.0.2 ping statistics --- 00:12:45.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.563 rtt min/avg/max/mdev = 0.642/0.642/0.642/0.000 ms 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:45.563 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:45.563 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.311 ms 00:12:45.563 00:12:45.563 --- 10.0.0.1 ping statistics --- 00:12:45.563 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:45.563 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:45.563 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=1236259 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 1236259 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 1236259 ']' 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:45.564 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:45.564 20:26:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:45.564 [2024-07-15 20:26:37.605429] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
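The two pings close out the split-network setup that nvmf_tcp_init builds: one e810 port (cvl_0_0) is moved into a network namespace to play the target, while the other (cvl_0_1) stays in the root namespace as the initiator. Condensed from the ip/iptables calls in the trace above:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # initiator -> target reachability

The nvmf target for this test is then launched inside the namespace (NVMF_TARGET_NS_CMD, i.e. ip netns exec cvl_0_0_ns_spdk), which is why the listener and every nvme connect in this suite address 10.0.0.2.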
00:12:45.564 [2024-07-15 20:26:37.605493] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:45.564 EAL: No free 2048 kB hugepages reported on node 1 00:12:45.564 [2024-07-15 20:26:37.688744] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:45.564 [2024-07-15 20:26:37.765450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:45.564 [2024-07-15 20:26:37.765492] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:45.564 [2024-07-15 20:26:37.765500] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:45.564 [2024-07-15 20:26:37.765507] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:45.564 [2024-07-15 20:26:37.765513] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:45.564 [2024-07-15 20:26:37.765658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:45.564 [2024-07-15 20:26:37.765778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:45.564 [2024-07-15 20:26:37.765935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:45.564 [2024-07-15 20:26:37.765936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.134 [2024-07-15 20:26:38.438819] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:46.134 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 Malloc0 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 Malloc1 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.135 20:26:38 
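The waitforlisten step seen above blocks until the target answers on /var/tmp/spdk.sock. The real helper in autotest_common.sh is more defensive; a simplified stand-in that polls with a real RPC (rpc_get_methods) would be:

for _ in $(seq 1 100); do
  # rpc.py fails until nvmf_tgt has bound its RPC socket
  ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null && break
  sleep 0.5
done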
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.135 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.394 [2024-07-15 20:26:38.524527] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -a 10.0.0.2 -s 4420 00:12:46.394 00:12:46.394 Discovery Log Number of Records 2, Generation counter 2 00:12:46.394 =====Discovery Log Entry 0====== 00:12:46.394 trtype: tcp 00:12:46.394 adrfam: ipv4 00:12:46.394 subtype: current discovery subsystem 00:12:46.394 treq: not required 00:12:46.394 portid: 0 00:12:46.394 trsvcid: 4420 00:12:46.394 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:46.394 traddr: 10.0.0.2 00:12:46.394 eflags: explicit discovery connections, duplicate discovery information 00:12:46.394 sectype: none 00:12:46.394 =====Discovery Log Entry 1====== 00:12:46.394 trtype: tcp 00:12:46.394 adrfam: ipv4 00:12:46.394 subtype: nvme subsystem 00:12:46.394 treq: not required 00:12:46.394 portid: 0 00:12:46.394 trsvcid: 4420 00:12:46.394 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:46.394 traddr: 10.0.0.2 00:12:46.394 eflags: none 00:12:46.394 sectype: none 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- 
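With the app up, nvme_cli.sh provisions everything over JSON-RPC and then asks the kernel initiator for the discovery log, which is the two-record output shown above. The calls, collapsed from the trace:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_malloc_create 64 512 -b Malloc1
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" -t tcp -a 10.0.0.2 -s 4420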
target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:46.394 20:26:38 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:47.780 20:26:40 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.326 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:50.327 20:26:42 
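get_nvme_devs scrapes nvme list for /dev/nvme* rows so the suite can count controllers before and after connecting. waitforserial then polls lsblk until both Malloc-backed namespaces surface with the subsystem serial; roughly:

nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
    -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
i=0
sleep 2
while (( i++ <= 15 )); do
  nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME)
  (( nvme_devices == 2 )) && break   # one block device per attached namespace
  sleep 2
done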
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:50.327 /dev/nvme0n1 ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:50.327 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- 
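Teardown mirrors setup: disconnect the controller, wait until no block device still advertises the serial (waitforserial_disconnect), then delete the subsystem over RPC. In outline:

nvme disconnect -n nqn.2016-06.io.spdk:cnode1
while lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME; do
  sleep 1                            # controller removal is asynchronous
done
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1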
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:50.327 rmmod nvme_tcp 00:12:50.327 rmmod nvme_fabrics 00:12:50.327 rmmod nvme_keyring 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 1236259 ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 1236259 ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1236259' 00:12:50.327 killing process with pid 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 1236259 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:50.327 20:26:42 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:52.871 20:26:44 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:52.871 00:12:52.871 real 0m15.438s 00:12:52.871 user 0m21.696s 00:12:52.871 sys 0m6.631s 00:12:52.871 20:26:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:52.871 20:26:44 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:52.871 ************************************ 00:12:52.871 END TEST nvmf_nvme_cli 00:12:52.871 ************************************ 00:12:52.871 20:26:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:52.871 20:26:44 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:12:52.871 20:26:44 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:52.871 20:26:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:52.871 20:26:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:52.871 20:26:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:52.871 ************************************ 00:12:52.871 START TEST nvmf_vfio_user 00:12:52.871 ************************************ 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:52.871 * Looking for test storage... 00:12:52.871 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:52.871 
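nvmf/common.sh, re-sourced above, mints one host identity for the whole suite: the NQN comes from nvme gen-hostnqn and the bare UUID doubles as the host ID, both riding along on every discover/connect via the NVME_HOST array. A sketch (the parameter expansion is only a stand-in for however common.sh actually extracts the UUID):

NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # assumption: UUID suffix reused as hostid
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 4420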
20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1237850 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1237850' 00:12:52.871 Process pid: 1237850 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1237850 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1237850 ']' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.871 20:26:44 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:52.871 [2024-07-15 20:26:44.967658] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:12:52.871 [2024-07-15 20:26:44.967732] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.871 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.871 [2024-07-15 20:26:45.039552] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.871 [2024-07-15 20:26:45.113843] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.871 [2024-07-15 20:26:45.113884] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.871 [2024-07-15 20:26:45.113892] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.871 [2024-07-15 20:26:45.113898] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.871 [2024-07-15 20:26:45.113904] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
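setup_nvmf_vfio_user needs no NICs or network namespaces: the target runs in the root namespace and each emulated controller is just a socket directory. The prologue clears stale state, starts the app on four cores, arms a cleanup trap, and waits for the RPC socket, per the trace:

rm -rf /var/run/vfio-user                          # stale sockets from an earlier run
./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &
nvmfpid=$!
trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT
waitforlisten $nvmfpid                             # same RPC-socket poll as before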
00:12:52.871 [2024-07-15 20:26:45.114043] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.871 [2024-07-15 20:26:45.114167] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.871 [2024-07-15 20:26:45.114325] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.872 [2024-07-15 20:26:45.114482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:53.443 20:26:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:53.443 20:26:45 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:12:53.443 20:26:45 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:54.386 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:54.646 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:54.646 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:54.646 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:54.646 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:54.646 20:26:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:54.908 Malloc1 00:12:54.908 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:55.190 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:55.190 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:55.451 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.451 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:55.451 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:55.451 Malloc2 00:12:55.451 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:55.712 20:26:47 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:55.974 20:26:48 
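Provisioning then loops over NUM_DEVICES (2), giving every controller its own directory, Malloc bdev, and subsystem. The listener address is the directory itself, and the service ID is passed as 0 since there is no TCP port to name. Collapsed:

rpc=./scripts/rpc.py
$rpc nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
for i in $(seq 1 2); do
  mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
  $rpc bdev_malloc_create 64 512 -b Malloc$i
  $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
  $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
  $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
      -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done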
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:55.974 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:55.974 [2024-07-15 20:26:48.339160] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:12:55.974 [2024-07-15 20:26:48.339224] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1238537 ] 00:12:55.974 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.237 [2024-07-15 20:26:48.373868] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:56.237 [2024-07-15 20:26:48.377313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:56.237 [2024-07-15 20:26:48.377334] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fef51d3f000 00:12:56.237 [2024-07-15 20:26:48.378314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.379309] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.380319] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.381324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.382328] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.383333] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.384338] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.385343] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:56.237 [2024-07-15 20:26:48.386351] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:56.237 [2024-07-15 20:26:48.386361] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fef51d34000 00:12:56.237 [2024-07-15 20:26:48.387687] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:56.237 [2024-07-15 20:26:48.406395] vfio_user_pci.c: 
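Each sub-test then points a stock SPDK example at that directory through a transport ID string rather than an IP address; the identify run that produced the BAR-mapping and register trace here was, per the log:

./build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci
# -L enables the nvme/nvme_vfio/vfio_pci debug logs seen in this trace;
# -g corresponds to the --single-file-segments EAL argument shown above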
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:56.237 [2024-07-15 20:26:48.406422] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:56.237 [2024-07-15 20:26:48.411482] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:56.237 [2024-07-15 20:26:48.411527] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:56.237 [2024-07-15 20:26:48.411612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:56.237 [2024-07-15 20:26:48.411630] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:56.237 [2024-07-15 20:26:48.411636] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:56.237 [2024-07-15 20:26:48.412483] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:56.237 [2024-07-15 20:26:48.412494] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:56.238 [2024-07-15 20:26:48.412501] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:56.238 [2024-07-15 20:26:48.413495] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:56.238 [2024-07-15 20:26:48.413504] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:56.238 [2024-07-15 20:26:48.413511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.414498] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:56.238 [2024-07-15 20:26:48.414507] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.415498] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:56.238 [2024-07-15 20:26:48.415506] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:56.238 [2024-07-15 20:26:48.415511] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.415517] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.415623] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:56.238 [2024-07-15 20:26:48.415627] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.415635] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:56.238 [2024-07-15 20:26:48.416503] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:56.238 [2024-07-15 20:26:48.417506] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:56.238 [2024-07-15 20:26:48.418513] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:56.238 [2024-07-15 20:26:48.419516] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:56.238 [2024-07-15 20:26:48.419571] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:56.238 [2024-07-15 20:26:48.420532] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:56.238 [2024-07-15 20:26:48.420539] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:56.238 [2024-07-15 20:26:48.420544] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420565] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:56.238 [2024-07-15 20:26:48.420573] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420588] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.238 [2024-07-15 20:26:48.420593] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.238 [2024-07-15 20:26:48.420606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420654] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:56.238 [2024-07-15 20:26:48.420658] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:56.238 [2024-07-15 20:26:48.420663] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:56.238 [2024-07-15 20:26:48.420667] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:56.238 [2024-07-15 20:26:48.420672] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:56.238 [2024-07-15 20:26:48.420676] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:56.238 [2024-07-15 20:26:48.420681] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420688] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420698] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.238 [2024-07-15 20:26:48.420730] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.238 [2024-07-15 20:26:48.420739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.238 [2024-07-15 20:26:48.420747] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:56.238 [2024-07-15 20:26:48.420752] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420760] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420769] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420782] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:56.238 [2024-07-15 20:26:48.420786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420793] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420799] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420807] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420873] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420888] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:56.238 [2024-07-15 20:26:48.420893] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:56.238 [2024-07-15 20:26:48.420899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420921] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:56.238 [2024-07-15 20:26:48.420931] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420939] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420946] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.238 [2024-07-15 20:26:48.420950] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.238 [2024-07-15 20:26:48.420956] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.420971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.420982] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.420996] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:56.238 [2024-07-15 20:26:48.421001] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.238 [2024-07-15 20:26:48.421007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.238 [2024-07-15 20:26:48.421020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:56.238 [2024-07-15 20:26:48.421027] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.421034] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
00:12:56.238 [2024-07-15 20:26:48.421044] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.421050] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.421055] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:56.238 [2024-07-15 20:26:48.421060] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:56.239 [2024-07-15 20:26:48.421065] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:56.239 [2024-07-15 20:26:48.421070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:56.239 [2024-07-15 20:26:48.421075] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:56.239 [2024-07-15 20:26:48.421092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421135] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421177] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:56.239 [2024-07-15 20:26:48.421184] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:56.239 [2024-07-15 20:26:48.421187] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:56.239 [2024-07-15 20:26:48.421191] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:56.239 [2024-07-15 20:26:48.421197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:56.239 [2024-07-15 20:26:48.421205] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:56.239 
[2024-07-15 20:26:48.421209] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:56.239 [2024-07-15 20:26:48.421215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421222] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:56.239 [2024-07-15 20:26:48.421226] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:56.239 [2024-07-15 20:26:48.421236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421244] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:56.239 [2024-07-15 20:26:48.421248] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:56.239 [2024-07-15 20:26:48.421255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:56.239 [2024-07-15 20:26:48.421262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:56.239 [2024-07-15 20:26:48.421290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:56.239 ===================================================== 00:12:56.239 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:56.239 ===================================================== 00:12:56.239 Controller Capabilities/Features 00:12:56.239 ================================ 00:12:56.239 Vendor ID: 4e58 00:12:56.239 Subsystem Vendor ID: 4e58 00:12:56.239 Serial Number: SPDK1 00:12:56.239 Model Number: SPDK bdev Controller 00:12:56.239 Firmware Version: 24.09 00:12:56.239 Recommended Arb Burst: 6 00:12:56.239 IEEE OUI Identifier: 8d 6b 50 00:12:56.239 Multi-path I/O 00:12:56.239 May have multiple subsystem ports: Yes 00:12:56.239 May have multiple controllers: Yes 00:12:56.239 Associated with SR-IOV VF: No 00:12:56.239 Max Data Transfer Size: 131072 00:12:56.239 Max Number of Namespaces: 32 00:12:56.239 Max Number of I/O Queues: 127 00:12:56.239 NVMe Specification Version (VS): 1.3 00:12:56.239 NVMe Specification Version (Identify): 1.3 00:12:56.239 Maximum Queue Entries: 256 00:12:56.239 Contiguous Queues Required: Yes 00:12:56.239 Arbitration Mechanisms Supported 00:12:56.239 Weighted Round Robin: Not Supported 00:12:56.239 Vendor Specific: Not Supported 00:12:56.239 Reset Timeout: 15000 ms 00:12:56.239 Doorbell Stride: 4 bytes 00:12:56.239 NVM Subsystem Reset: Not Supported 00:12:56.239 Command Sets Supported 00:12:56.239 NVM Command Set: Supported 00:12:56.239 Boot Partition: Not Supported 00:12:56.239 Memory Page Size Minimum: 4096 bytes 00:12:56.239 Memory Page Size Maximum: 4096 bytes 00:12:56.239 Persistent Memory Region: Not Supported 
00:12:56.239 Optional Asynchronous Events Supported 00:12:56.239 Namespace Attribute Notices: Supported 00:12:56.239 Firmware Activation Notices: Not Supported 00:12:56.239 ANA Change Notices: Not Supported 00:12:56.239 PLE Aggregate Log Change Notices: Not Supported 00:12:56.239 LBA Status Info Alert Notices: Not Supported 00:12:56.239 EGE Aggregate Log Change Notices: Not Supported 00:12:56.239 Normal NVM Subsystem Shutdown event: Not Supported 00:12:56.239 Zone Descriptor Change Notices: Not Supported 00:12:56.239 Discovery Log Change Notices: Not Supported 00:12:56.239 Controller Attributes 00:12:56.239 128-bit Host Identifier: Supported 00:12:56.239 Non-Operational Permissive Mode: Not Supported 00:12:56.239 NVM Sets: Not Supported 00:12:56.239 Read Recovery Levels: Not Supported 00:12:56.239 Endurance Groups: Not Supported 00:12:56.239 Predictable Latency Mode: Not Supported 00:12:56.239 Traffic Based Keep ALive: Not Supported 00:12:56.239 Namespace Granularity: Not Supported 00:12:56.239 SQ Associations: Not Supported 00:12:56.239 UUID List: Not Supported 00:12:56.239 Multi-Domain Subsystem: Not Supported 00:12:56.239 Fixed Capacity Management: Not Supported 00:12:56.239 Variable Capacity Management: Not Supported 00:12:56.239 Delete Endurance Group: Not Supported 00:12:56.239 Delete NVM Set: Not Supported 00:12:56.239 Extended LBA Formats Supported: Not Supported 00:12:56.239 Flexible Data Placement Supported: Not Supported 00:12:56.239 00:12:56.239 Controller Memory Buffer Support 00:12:56.239 ================================ 00:12:56.239 Supported: No 00:12:56.239 00:12:56.239 Persistent Memory Region Support 00:12:56.239 ================================ 00:12:56.239 Supported: No 00:12:56.239 00:12:56.239 Admin Command Set Attributes 00:12:56.239 ============================ 00:12:56.239 Security Send/Receive: Not Supported 00:12:56.239 Format NVM: Not Supported 00:12:56.239 Firmware Activate/Download: Not Supported 00:12:56.239 Namespace Management: Not Supported 00:12:56.239 Device Self-Test: Not Supported 00:12:56.239 Directives: Not Supported 00:12:56.239 NVMe-MI: Not Supported 00:12:56.239 Virtualization Management: Not Supported 00:12:56.239 Doorbell Buffer Config: Not Supported 00:12:56.239 Get LBA Status Capability: Not Supported 00:12:56.239 Command & Feature Lockdown Capability: Not Supported 00:12:56.239 Abort Command Limit: 4 00:12:56.239 Async Event Request Limit: 4 00:12:56.239 Number of Firmware Slots: N/A 00:12:56.239 Firmware Slot 1 Read-Only: N/A 00:12:56.239 Firmware Activation Without Reset: N/A 00:12:56.239 Multiple Update Detection Support: N/A 00:12:56.239 Firmware Update Granularity: No Information Provided 00:12:56.239 Per-Namespace SMART Log: No 00:12:56.239 Asymmetric Namespace Access Log Page: Not Supported 00:12:56.239 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:56.239 Command Effects Log Page: Supported 00:12:56.239 Get Log Page Extended Data: Supported 00:12:56.239 Telemetry Log Pages: Not Supported 00:12:56.239 Persistent Event Log Pages: Not Supported 00:12:56.239 Supported Log Pages Log Page: May Support 00:12:56.239 Commands Supported & Effects Log Page: Not Supported 00:12:56.239 Feature Identifiers & Effects Log Page:May Support 00:12:56.239 NVMe-MI Commands & Effects Log Page: May Support 00:12:56.239 Data Area 4 for Telemetry Log: Not Supported 00:12:56.239 Error Log Page Entries Supported: 128 00:12:56.239 Keep Alive: Supported 00:12:56.239 Keep Alive Granularity: 10000 ms 00:12:56.239 00:12:56.239 NVM Command Set Attributes 
00:12:56.239 ========================== 00:12:56.239 Submission Queue Entry Size 00:12:56.239 Max: 64 00:12:56.239 Min: 64 00:12:56.239 Completion Queue Entry Size 00:12:56.239 Max: 16 00:12:56.239 Min: 16 00:12:56.239 Number of Namespaces: 32 00:12:56.239 Compare Command: Supported 00:12:56.239 Write Uncorrectable Command: Not Supported 00:12:56.239 Dataset Management Command: Supported 00:12:56.239 Write Zeroes Command: Supported 00:12:56.239 Set Features Save Field: Not Supported 00:12:56.239 Reservations: Not Supported 00:12:56.239 Timestamp: Not Supported 00:12:56.239 Copy: Supported 00:12:56.239 Volatile Write Cache: Present 00:12:56.239 Atomic Write Unit (Normal): 1 00:12:56.240 Atomic Write Unit (PFail): 1 00:12:56.240 Atomic Compare & Write Unit: 1 00:12:56.240 Fused Compare & Write: Supported 00:12:56.240 Scatter-Gather List 00:12:56.240 SGL Command Set: Supported (Dword aligned) 00:12:56.240 SGL Keyed: Not Supported 00:12:56.240 SGL Bit Bucket Descriptor: Not Supported 00:12:56.240 SGL Metadata Pointer: Not Supported 00:12:56.240 Oversized SGL: Not Supported 00:12:56.240 SGL Metadata Address: Not Supported 00:12:56.240 SGL Offset: Not Supported 00:12:56.240 Transport SGL Data Block: Not Supported 00:12:56.240 Replay Protected Memory Block: Not Supported 00:12:56.240 00:12:56.240 Firmware Slot Information 00:12:56.240 ========================= 00:12:56.240 Active slot: 1 00:12:56.240 Slot 1 Firmware Revision: 24.09 00:12:56.240 00:12:56.240 00:12:56.240 Commands Supported and Effects 00:12:56.240 ============================== 00:12:56.240 Admin Commands 00:12:56.240 -------------- 00:12:56.240 Get Log Page (02h): Supported 00:12:56.240 Identify (06h): Supported 00:12:56.240 Abort (08h): Supported 00:12:56.240 Set Features (09h): Supported 00:12:56.240 Get Features (0Ah): Supported 00:12:56.240 Asynchronous Event Request (0Ch): Supported 00:12:56.240 Keep Alive (18h): Supported 00:12:56.240 I/O Commands 00:12:56.240 ------------ 00:12:56.240 Flush (00h): Supported LBA-Change 00:12:56.240 Write (01h): Supported LBA-Change 00:12:56.240 Read (02h): Supported 00:12:56.240 Compare (05h): Supported 00:12:56.240 Write Zeroes (08h): Supported LBA-Change 00:12:56.240 Dataset Management (09h): Supported LBA-Change 00:12:56.240 Copy (19h): Supported LBA-Change 00:12:56.240 00:12:56.240 Error Log 00:12:56.240 ========= 00:12:56.240 00:12:56.240 Arbitration 00:12:56.240 =========== 00:12:56.240 Arbitration Burst: 1 00:12:56.240 00:12:56.240 Power Management 00:12:56.240 ================ 00:12:56.240 Number of Power States: 1 00:12:56.240 Current Power State: Power State #0 00:12:56.240 Power State #0: 00:12:56.240 Max Power: 0.00 W 00:12:56.240 Non-Operational State: Operational 00:12:56.240 Entry Latency: Not Reported 00:12:56.240 Exit Latency: Not Reported 00:12:56.240 Relative Read Throughput: 0 00:12:56.240 Relative Read Latency: 0 00:12:56.240 Relative Write Throughput: 0 00:12:56.240 Relative Write Latency: 0 00:12:56.240 Idle Power: Not Reported 00:12:56.240 Active Power: Not Reported 00:12:56.240 Non-Operational Permissive Mode: Not Supported 00:12:56.240 00:12:56.240 Health Information 00:12:56.240 ================== 00:12:56.240 Critical Warnings: 00:12:56.240 Available Spare Space: OK 00:12:56.240 Temperature: OK 00:12:56.240 Device Reliability: OK 00:12:56.240 Read Only: No 00:12:56.240 Volatile Memory Backup: OK 00:12:56.240 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:56.240 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:56.240 Available Spare: 0% 00:12:56.240 
[2024-07-15 20:26:48.421391] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:56.240 [2024-07-15 20:26:48.421401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:56.240 [2024-07-15 20:26:48.421430] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:56.240 [2024-07-15 20:26:48.421438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.240 [2024-07-15 20:26:48.421445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.240 [2024-07-15 20:26:48.421451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.240 [2024-07-15 20:26:48.421457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:56.240 [2024-07-15 20:26:48.421539] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:56.240 [2024-07-15 20:26:48.421548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:56.240 [2024-07-15 20:26:48.422542] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:56.240 [2024-07-15 20:26:48.422583] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:56.240 [2024-07-15 20:26:48.422592] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:56.240 [2024-07-15 20:26:48.423552] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:56.240 [2024-07-15 20:26:48.423563] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:56.240 [2024-07-15 20:26:48.423625] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:56.240 [2024-07-15 20:26:48.425572] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:56.240 Available Spare Threshold: 0% 00:12:56.240 Life Percentage Used: 0% 00:12:56.240 Data Units Read: 0 00:12:56.240 Data Units Written: 0 00:12:56.240 Host Read Commands: 0 00:12:56.240 Host Write Commands: 0 00:12:56.240 Controller Busy Time: 0 minutes 00:12:56.240 Power Cycles: 0 00:12:56.240 Power On Hours: 0 hours 00:12:56.240 Unsafe Shutdowns: 0 00:12:56.240 Unrecoverable Media Errors: 0 00:12:56.240 Lifetime Error Log Entries: 0 00:12:56.240 Warning Temperature Time: 0 minutes 00:12:56.240 Critical Temperature Time: 0 minutes 00:12:56.240 00:12:56.240 Number of Queues 00:12:56.240 ================ 00:12:56.240 Number of I/O Submission Queues: 127 00:12:56.240 Number of I/O Completion Queues: 127 00:12:56.240 00:12:56.240 Active Namespaces 00:12:56.240 ================= 00:12:56.240 Namespace ID:1 00:12:56.240 Error Recovery Timeout: Unlimited 00:12:56.240 Command
Set Identifier: NVM (00h) 00:12:56.240 Deallocate: Supported 00:12:56.240 Deallocated/Unwritten Error: Not Supported 00:12:56.240 Deallocated Read Value: Unknown 00:12:56.240 Deallocate in Write Zeroes: Not Supported 00:12:56.240 Deallocated Guard Field: 0xFFFF 00:12:56.240 Flush: Supported 00:12:56.240 Reservation: Supported 00:12:56.240 Namespace Sharing Capabilities: Multiple Controllers 00:12:56.240 Size (in LBAs): 131072 (0GiB) 00:12:56.240 Capacity (in LBAs): 131072 (0GiB) 00:12:56.240 Utilization (in LBAs): 131072 (0GiB) 00:12:56.240 NGUID: 88A765369D2A4256B8D84771710F8FA0 00:12:56.240 UUID: 88a76536-9d2a-4256-b8d8-4771710f8fa0 00:12:56.240 Thin Provisioning: Not Supported 00:12:56.240 Per-NS Atomic Units: Yes 00:12:56.240 Atomic Boundary Size (Normal): 0 00:12:56.240 Atomic Boundary Size (PFail): 0 00:12:56.240 Atomic Boundary Offset: 0 00:12:56.240 Maximum Single Source Range Length: 65535 00:12:56.240 Maximum Copy Length: 65535 00:12:56.240 Maximum Source Range Count: 1 00:12:56.240 NGUID/EUI64 Never Reused: No 00:12:56.240 Namespace Write Protected: No 00:12:56.240 Number of LBA Formats: 1 00:12:56.240 Current LBA Format: LBA Format #00 00:12:56.240 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:56.240 00:12:56.240 20:26:48 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:56.240 EAL: No free 2048 kB hugepages reported on node 1 00:12:56.240 [2024-07-15 20:26:48.609885] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:01.593 Initializing NVMe Controllers 00:13:01.593 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:01.593 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:01.593 Initialization complete. Launching workers. 00:13:01.593 ======================================================== 00:13:01.593 Latency(us) 00:13:01.593 Device Information : IOPS MiB/s Average min max 00:13:01.593 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39961.60 156.10 3205.81 834.01 7670.13 00:13:01.593 ======================================================== 00:13:01.593 Total : 39961.60 156.10 3205.81 834.01 7670.13 00:13:01.593 00:13:01.593 [2024-07-15 20:26:53.630893] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:01.593 20:26:53 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:01.593 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.593 [2024-07-15 20:26:53.808729] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:06.880 Initializing NVMe Controllers 00:13:06.880 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:06.880 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:06.880 Initialization complete. Launching workers. 
00:13:06.880 ======================================================== 00:13:06.880 Latency(us) 00:13:06.880 Device Information : IOPS MiB/s Average min max 00:13:06.880 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16053.16 62.71 7979.06 5983.68 8979.51 00:13:06.880 ======================================================== 00:13:06.880 Total : 16053.16 62.71 7979.06 5983.68 8979.51 00:13:06.880 00:13:06.880 [2024-07-15 20:26:58.850241] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:06.880 20:26:58 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:06.880 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.880 [2024-07-15 20:26:59.044106] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:12.169 [2024-07-15 20:27:04.117468] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:12.169 Initializing NVMe Controllers 00:13:12.169 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:12.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:12.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:12.169 Initialization complete. Launching workers. 00:13:12.169 Starting thread on core 2 00:13:12.169 Starting thread on core 3 00:13:12.169 Starting thread on core 1 00:13:12.169 20:27:04 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:12.169 EAL: No free 2048 kB hugepages reported on node 1 00:13:12.169 [2024-07-15 20:27:04.387605] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.476 [2024-07-15 20:27:07.447923] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.476 Initializing NVMe Controllers 00:13:15.476 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.476 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.476 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:15.476 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:15.477 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:15.477 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:15.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:15.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:15.477 Initialization complete. Launching workers. 
00:13:15.477 Starting thread on core 1 with urgent priority queue 00:13:15.477 Starting thread on core 2 with urgent priority queue 00:13:15.477 Starting thread on core 3 with urgent priority queue 00:13:15.477 Starting thread on core 0 with urgent priority queue 00:13:15.477 SPDK bdev Controller (SPDK1 ) core 0: 8115.33 IO/s 12.32 secs/100000 ios 00:13:15.477 SPDK bdev Controller (SPDK1 ) core 1: 8325.67 IO/s 12.01 secs/100000 ios 00:13:15.477 SPDK bdev Controller (SPDK1 ) core 2: 10775.00 IO/s 9.28 secs/100000 ios 00:13:15.477 SPDK bdev Controller (SPDK1 ) core 3: 12077.67 IO/s 8.28 secs/100000 ios 00:13:15.477 ======================================================== 00:13:15.477 00:13:15.477 20:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.477 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.477 [2024-07-15 20:27:07.721114] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:15.477 Initializing NVMe Controllers 00:13:15.477 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.477 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:15.477 Namespace ID: 1 size: 0GB 00:13:15.477 Initialization complete. 00:13:15.477 INFO: using host memory buffer for IO 00:13:15.477 Hello world! 00:13:15.477 [2024-07-15 20:27:07.756307] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:15.477 20:27:07 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:15.738 EAL: No free 2048 kB hugepages reported on node 1 00:13:15.738 [2024-07-15 20:27:08.019061] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:16.681 Initializing NVMe Controllers 00:13:16.681 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.681 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:16.681 Initialization complete. Launching workers. 
00:13:16.681 submit (in ns) avg, min, max = 7292.4, 3892.5, 6992426.7 00:13:16.681 complete (in ns) avg, min, max = 16663.6, 2404.2, 5992510.8 00:13:16.681 00:13:16.681 Submit histogram 00:13:16.681 ================ 00:13:16.681 Range in us Cumulative Count 00:13:16.681 3.867 - 3.893: 0.0052% ( 1) 00:13:16.681 3.893 - 3.920: 0.8215% ( 157) 00:13:16.681 3.920 - 3.947: 3.1664% ( 451) 00:13:16.681 3.947 - 3.973: 10.2948% ( 1371) 00:13:16.681 3.973 - 4.000: 22.2742% ( 2304) 00:13:16.681 4.000 - 4.027: 34.2640% ( 2306) 00:13:16.681 4.027 - 4.053: 47.1013% ( 2469) 00:13:16.681 4.053 - 4.080: 64.2853% ( 3305) 00:13:16.681 4.080 - 4.107: 78.7761% ( 2787) 00:13:16.681 4.107 - 4.133: 89.1489% ( 1995) 00:13:16.681 4.133 - 4.160: 94.9618% ( 1118) 00:13:16.681 4.160 - 4.187: 97.7019% ( 527) 00:13:16.681 4.187 - 4.213: 98.8821% ( 227) 00:13:16.681 4.213 - 4.240: 99.3241% ( 85) 00:13:16.681 4.240 - 4.267: 99.4333% ( 21) 00:13:16.681 4.267 - 4.293: 99.4801% ( 9) 00:13:16.682 4.293 - 4.320: 99.5009% ( 4) 00:13:16.682 4.320 - 4.347: 99.5061% ( 1) 00:13:16.682 4.560 - 4.587: 99.5113% ( 1) 00:13:16.682 4.640 - 4.667: 99.5165% ( 1) 00:13:16.682 4.747 - 4.773: 99.5217% ( 1) 00:13:16.682 4.773 - 4.800: 99.5269% ( 1) 00:13:16.682 4.800 - 4.827: 99.5321% ( 1) 00:13:16.682 4.853 - 4.880: 99.5373% ( 1) 00:13:16.682 4.880 - 4.907: 99.5425% ( 1) 00:13:16.682 4.907 - 4.933: 99.5477% ( 1) 00:13:16.682 5.120 - 5.147: 99.5529% ( 1) 00:13:16.682 5.173 - 5.200: 99.5581% ( 1) 00:13:16.682 5.307 - 5.333: 99.5633% ( 1) 00:13:16.682 5.600 - 5.627: 99.5685% ( 1) 00:13:16.682 5.707 - 5.733: 99.5736% ( 1) 00:13:16.682 6.080 - 6.107: 99.5788% ( 1) 00:13:16.682 6.133 - 6.160: 99.5840% ( 1) 00:13:16.682 6.160 - 6.187: 99.5892% ( 1) 00:13:16.682 6.187 - 6.213: 99.5944% ( 1) 00:13:16.682 6.240 - 6.267: 99.6048% ( 2) 00:13:16.682 6.400 - 6.427: 99.6100% ( 1) 00:13:16.682 6.453 - 6.480: 99.6152% ( 1) 00:13:16.682 6.480 - 6.507: 99.6256% ( 2) 00:13:16.682 6.693 - 6.720: 99.6308% ( 1) 00:13:16.682 6.720 - 6.747: 99.6412% ( 2) 00:13:16.682 6.747 - 6.773: 99.6516% ( 2) 00:13:16.682 6.773 - 6.800: 99.6568% ( 1) 00:13:16.682 6.800 - 6.827: 99.6672% ( 2) 00:13:16.682 6.827 - 6.880: 99.6776% ( 2) 00:13:16.682 6.880 - 6.933: 99.6932% ( 3) 00:13:16.682 6.933 - 6.987: 99.7036% ( 2) 00:13:16.682 6.987 - 7.040: 99.7140% ( 2) 00:13:16.682 7.040 - 7.093: 99.7296% ( 3) 00:13:16.682 7.093 - 7.147: 99.7452% ( 3) 00:13:16.682 7.147 - 7.200: 99.7556% ( 2) 00:13:16.682 7.200 - 7.253: 99.7712% ( 3) 00:13:16.682 7.307 - 7.360: 99.7868% ( 3) 00:13:16.682 7.413 - 7.467: 99.7920% ( 1) 00:13:16.682 7.467 - 7.520: 99.8180% ( 5) 00:13:16.682 7.573 - 7.627: 99.8336% ( 3) 00:13:16.682 7.787 - 7.840: 99.8440% ( 2) 00:13:16.682 7.840 - 7.893: 99.8544% ( 2) 00:13:16.682 8.053 - 8.107: 99.8596% ( 1) 00:13:16.682 8.107 - 8.160: 99.8648% ( 1) 00:13:16.682 8.267 - 8.320: 99.8700% ( 1) 00:13:16.682 8.427 - 8.480: 99.8752% ( 1) 00:13:16.682 8.480 - 8.533: 99.8804% ( 1) 00:13:16.682 8.533 - 8.587: 99.8856% ( 1) 00:13:16.682 8.907 - 8.960: 99.8908% ( 1) 00:13:16.682 12.320 - 12.373: 99.8960% ( 1) 00:13:16.682 15.040 - 15.147: 99.9012% ( 1) 00:13:16.682 15.680 - 15.787: 99.9116% ( 2) 00:13:16.682 16.747 - 16.853: 99.9168% ( 1) 00:13:16.682 1037.653 - 1044.480: 99.9220% ( 1) 00:13:16.682 2020.693 - 2034.347: 99.9324% ( 2) 00:13:16.682 3986.773 - 4014.080: 99.9896% ( 11) 00:13:16.682 5980.160 - 6007.467: 99.9948% ( 1) 00:13:16.682 6990.507 - 7045.120: 100.0000% ( 1) 00:13:16.682 00:13:16.682 Complete histogram 00:13:16.682 ================== 00:13:16.682 Range in us Cumulative 
Count 00:13:16.682 2.400 - 2.413: 0.7071% ( 136) 00:13:16.682 2.413 - 2.427: 1.0815% ( 72) 00:13:16.682 2.427 - 2.440: 1.1647% ( 16) 00:13:16.682 [2024-07-15 20:27:09.038605] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:16.943 2.440 - 2.453: 1.2687% ( 20) 00:13:16.943 2.453 - 2.467: 1.3206% ( 10) 00:13:16.943 2.467 - 2.480: 4.7782% ( 665) 00:13:16.943 2.480 - 2.493: 45.2140% ( 7777) 00:13:16.943 2.493 - 2.507: 60.9265% ( 3022) 00:13:16.943 2.507 - 2.520: 72.9891% ( 2320) 00:13:16.943 2.520 - 2.533: 80.0759% ( 1363) 00:13:16.943 2.533 - 2.547: 81.9685% ( 364) 00:13:16.943 2.547 - 2.560: 85.8316% ( 743) 00:13:16.943 2.560 - 2.573: 91.5666% ( 1103) 00:13:16.943 2.573 - 2.587: 95.3621% ( 730) 00:13:16.943 2.587 - 2.600: 97.6135% ( 433) 00:13:16.943 2.600 - 2.613: 98.8717% ( 242) 00:13:16.943 2.613 - 2.627: 99.3293% ( 88) 00:13:16.943 2.627 - 2.640: 99.4021% ( 14) 00:13:16.943 2.640 - 2.653: 99.4073% ( 1) 00:13:16.943 3.360 - 3.373: 99.4125% ( 1) 00:13:16.943 4.480 - 4.507: 99.4177% ( 1) 00:13:16.943 4.693 - 4.720: 99.4229% ( 1) 00:13:16.943 4.747 - 4.773: 99.4281% ( 1) 00:13:16.943 4.773 - 4.800: 99.4333% ( 1) 00:13:16.943 4.880 - 4.907: 99.4385% ( 1) 00:13:16.943 4.907 - 4.933: 99.4437% ( 1) 00:13:16.943 4.933 - 4.960: 99.4489% ( 1) 00:13:16.943 4.987 - 5.013: 99.4541% ( 1) 00:13:16.943 5.093 - 5.120: 99.4593% ( 1) 00:13:16.943 5.147 - 5.173: 99.4645% ( 1) 00:13:16.943 5.173 - 5.200: 99.4697% ( 1) 00:13:16.943 5.200 - 5.227: 99.4801% ( 2) 00:13:16.943 5.280 - 5.307: 99.4905% ( 2) 00:13:16.943 5.307 - 5.333: 99.4957% ( 1) 00:13:16.943 5.333 - 5.360: 99.5009% ( 1) 00:13:16.943 5.360 - 5.387: 99.5061% ( 1) 00:13:16.943 5.387 - 5.413: 99.5113% ( 1) 00:13:16.943 5.413 - 5.440: 99.5165% ( 1) 00:13:16.943 5.573 - 5.600: 99.5217% ( 1) 00:13:16.943 5.627 - 5.653: 99.5269% ( 1) 00:13:16.943 5.653 - 5.680: 99.5321% ( 1) 00:13:16.943 5.680 - 5.707: 99.5373% ( 1) 00:13:16.943 5.733 - 5.760: 99.5477% ( 2) 00:13:16.943 5.893 - 5.920: 99.5529% ( 1) 00:13:16.943 5.947 - 5.973: 99.5685% ( 3) 00:13:16.943 6.053 - 6.080: 99.5736% ( 1) 00:13:16.943 6.080 - 6.107: 99.5840% ( 2) 00:13:16.943 6.107 - 6.133: 99.5944% ( 2) 00:13:16.943 6.240 - 6.267: 99.6048% ( 2) 00:13:16.943 6.293 - 6.320: 99.6152% ( 2) 00:13:16.943 6.320 - 6.347: 99.6204% ( 1) 00:13:16.943 6.613 - 6.640: 99.6256% ( 1) 00:13:16.943 7.040 - 7.093: 99.6308% ( 1) 00:13:16.943 13.013 - 13.067: 99.6360% ( 1) 00:13:16.943 45.227 - 45.440: 99.6412% ( 1) 00:13:16.943 1037.653 - 1044.480: 99.6464% ( 1) 00:13:16.943 1269.760 - 1276.587: 99.6516% ( 1) 00:13:16.943 2034.347 - 2048.000: 99.6568% ( 1) 00:13:16.943 3986.773 - 4014.080: 99.9844% ( 63) 00:13:16.943 4014.080 - 4041.387: 99.9896% ( 1) 00:13:16.943 5980.160 - 6007.467: 100.0000% ( 2) 00:13:16.943 00:13:16.943 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:16.943 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:16.943 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:16.943 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:16.943 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:16.943 [ 00:13:16.943 { 00:13:16.943 "nqn":
"nqn.2014-08.org.nvmexpress.discovery", 00:13:16.943 "subtype": "Discovery", 00:13:16.943 "listen_addresses": [], 00:13:16.943 "allow_any_host": true, 00:13:16.943 "hosts": [] 00:13:16.943 }, 00:13:16.943 { 00:13:16.943 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:16.943 "subtype": "NVMe", 00:13:16.943 "listen_addresses": [ 00:13:16.943 { 00:13:16.943 "trtype": "VFIOUSER", 00:13:16.943 "adrfam": "IPv4", 00:13:16.943 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:16.943 "trsvcid": "0" 00:13:16.943 } 00:13:16.943 ], 00:13:16.943 "allow_any_host": true, 00:13:16.943 "hosts": [], 00:13:16.943 "serial_number": "SPDK1", 00:13:16.943 "model_number": "SPDK bdev Controller", 00:13:16.943 "max_namespaces": 32, 00:13:16.943 "min_cntlid": 1, 00:13:16.943 "max_cntlid": 65519, 00:13:16.943 "namespaces": [ 00:13:16.943 { 00:13:16.943 "nsid": 1, 00:13:16.943 "bdev_name": "Malloc1", 00:13:16.943 "name": "Malloc1", 00:13:16.943 "nguid": "88A765369D2A4256B8D84771710F8FA0", 00:13:16.943 "uuid": "88a76536-9d2a-4256-b8d8-4771710f8fa0" 00:13:16.943 } 00:13:16.944 ] 00:13:16.944 }, 00:13:16.944 { 00:13:16.944 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:16.944 "subtype": "NVMe", 00:13:16.944 "listen_addresses": [ 00:13:16.944 { 00:13:16.944 "trtype": "VFIOUSER", 00:13:16.944 "adrfam": "IPv4", 00:13:16.944 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:16.944 "trsvcid": "0" 00:13:16.944 } 00:13:16.944 ], 00:13:16.944 "allow_any_host": true, 00:13:16.944 "hosts": [], 00:13:16.944 "serial_number": "SPDK2", 00:13:16.944 "model_number": "SPDK bdev Controller", 00:13:16.944 "max_namespaces": 32, 00:13:16.944 "min_cntlid": 1, 00:13:16.944 "max_cntlid": 65519, 00:13:16.944 "namespaces": [ 00:13:16.944 { 00:13:16.944 "nsid": 1, 00:13:16.944 "bdev_name": "Malloc2", 00:13:16.944 "name": "Malloc2", 00:13:16.944 "nguid": "448FEA17E58E4641BA2474A639C069D4", 00:13:16.944 "uuid": "448fea17-e58e-4641-ba24-74a639c069d4" 00:13:16.944 } 00:13:16.944 ] 00:13:16.944 } 00:13:16.944 ] 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1242981 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:16.944 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:16.944 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.204 Malloc3 00:13:17.204 [2024-07-15 20:27:09.427673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:17.204 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:17.466 [2024-07-15 20:27:09.597801] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:17.466 Asynchronous Event Request test 00:13:17.466 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.466 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:17.466 Registering asynchronous event callbacks... 00:13:17.466 Starting namespace attribute notice tests for all controllers... 00:13:17.466 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:17.466 aer_cb - Changed Namespace 00:13:17.466 Cleaning up... 00:13:17.466 [ 00:13:17.466 { 00:13:17.466 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:17.466 "subtype": "Discovery", 00:13:17.466 "listen_addresses": [], 00:13:17.466 "allow_any_host": true, 00:13:17.466 "hosts": [] 00:13:17.466 }, 00:13:17.466 { 00:13:17.466 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:17.466 "subtype": "NVMe", 00:13:17.466 "listen_addresses": [ 00:13:17.466 { 00:13:17.466 "trtype": "VFIOUSER", 00:13:17.466 "adrfam": "IPv4", 00:13:17.466 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:17.466 "trsvcid": "0" 00:13:17.466 } 00:13:17.466 ], 00:13:17.466 "allow_any_host": true, 00:13:17.466 "hosts": [], 00:13:17.466 "serial_number": "SPDK1", 00:13:17.466 "model_number": "SPDK bdev Controller", 00:13:17.466 "max_namespaces": 32, 00:13:17.466 "min_cntlid": 1, 00:13:17.466 "max_cntlid": 65519, 00:13:17.466 "namespaces": [ 00:13:17.466 { 00:13:17.466 "nsid": 1, 00:13:17.466 "bdev_name": "Malloc1", 00:13:17.466 "name": "Malloc1", 00:13:17.466 "nguid": "88A765369D2A4256B8D84771710F8FA0", 00:13:17.466 "uuid": "88a76536-9d2a-4256-b8d8-4771710f8fa0" 00:13:17.466 }, 00:13:17.466 { 00:13:17.466 "nsid": 2, 00:13:17.466 "bdev_name": "Malloc3", 00:13:17.466 "name": "Malloc3", 00:13:17.466 "nguid": "6FDD27A6CE804A31918F4F5800F866BB", 00:13:17.466 "uuid": "6fdd27a6-ce80-4a31-918f-4f5800f866bb" 00:13:17.466 } 00:13:17.466 ] 00:13:17.466 }, 00:13:17.466 { 00:13:17.466 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:17.466 "subtype": "NVMe", 00:13:17.466 "listen_addresses": [ 00:13:17.466 { 00:13:17.466 "trtype": "VFIOUSER", 00:13:17.466 "adrfam": "IPv4", 00:13:17.466 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:17.466 "trsvcid": "0" 00:13:17.466 } 00:13:17.466 ], 00:13:17.466 "allow_any_host": true, 00:13:17.466 "hosts": [], 00:13:17.466 "serial_number": "SPDK2", 00:13:17.466 "model_number": "SPDK bdev Controller", 00:13:17.466 
"max_namespaces": 32, 00:13:17.466 "min_cntlid": 1, 00:13:17.466 "max_cntlid": 65519, 00:13:17.466 "namespaces": [ 00:13:17.466 { 00:13:17.466 "nsid": 1, 00:13:17.466 "bdev_name": "Malloc2", 00:13:17.466 "name": "Malloc2", 00:13:17.466 "nguid": "448FEA17E58E4641BA2474A639C069D4", 00:13:17.466 "uuid": "448fea17-e58e-4641-ba24-74a639c069d4" 00:13:17.466 } 00:13:17.466 ] 00:13:17.466 } 00:13:17.466 ] 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1242981 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:17.466 20:27:09 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:17.466 [2024-07-15 20:27:09.819560] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:13:17.466 [2024-07-15 20:27:09.819635] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1243066 ] 00:13:17.466 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.730 [2024-07-15 20:27:09.850780] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:17.730 [2024-07-15 20:27:09.859453] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.730 [2024-07-15 20:27:09.859474] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fc231421000 00:13:17.730 [2024-07-15 20:27:09.860449] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.861454] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.862458] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.863464] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.864469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.865473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.866480] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.867489] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:17.730 [2024-07-15 20:27:09.868497] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:17.730 [2024-07-15 20:27:09.868507] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fc231416000 00:13:17.730 [2024-07-15 20:27:09.869832] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.730 [2024-07-15 20:27:09.886039] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:17.730 [2024-07-15 20:27:09.886064] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:13:17.730 [2024-07-15 20:27:09.891138] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:17.730 [2024-07-15 20:27:09.891184] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:17.730 [2024-07-15 20:27:09.891269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:13:17.730 [2024-07-15 20:27:09.891283] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:13:17.730 [2024-07-15 20:27:09.891289] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:13:17.730 [2024-07-15 20:27:09.892137] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:17.730 [2024-07-15 20:27:09.892150] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:13:17.730 [2024-07-15 20:27:09.892157] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:13:17.730 [2024-07-15 20:27:09.893143] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:17.730 [2024-07-15 20:27:09.893152] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:13:17.730 [2024-07-15 20:27:09.893159] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:13:17.730 [2024-07-15 20:27:09.894155] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:17.730 [2024-07-15 20:27:09.894165] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:17.730 [2024-07-15 20:27:09.895158] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:17.730 [2024-07-15 20:27:09.895167] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:13:17.730 [2024-07-15 20:27:09.895172] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:13:17.730 [2024-07-15 20:27:09.895178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:17.730 [2024-07-15 20:27:09.895284] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:13:17.730 [2024-07-15 20:27:09.895290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:17.731 [2024-07-15 20:27:09.895295] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:17.731 [2024-07-15 20:27:09.896165] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:17.731 [2024-07-15 20:27:09.897169] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:17.731 [2024-07-15 20:27:09.898173] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:17.731 [2024-07-15 20:27:09.899174] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:17.731 [2024-07-15 20:27:09.899213] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:17.731 [2024-07-15 20:27:09.900188] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:17.731 [2024-07-15 20:27:09.900197] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:17.731 [2024-07-15 20:27:09.900202] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.900223] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:13:17.731 [2024-07-15 20:27:09.900234] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.900247] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.731 [2024-07-15 20:27:09.900253] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.731 [2024-07-15 20:27:09.900265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.908238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.908253] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:13:17.731 [2024-07-15 20:27:09.908259] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:13:17.731 [2024-07-15 20:27:09.908266] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:13:17.731 [2024-07-15 20:27:09.908271] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:17.731 [2024-07-15 20:27:09.908276] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:13:17.731 [2024-07-15 20:27:09.908280] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:13:17.731 [2024-07-15 20:27:09.908285] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.908293] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.908304] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.916236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.916252] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.731 [2024-07-15 20:27:09.916261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.731 [2024-07-15 20:27:09.916269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.731 [2024-07-15 20:27:09.916278] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:17.731 [2024-07-15 20:27:09.916282] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.916290] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.916299] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.924236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.924244] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:13:17.731 [2024-07-15 20:27:09.924249] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.924256] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.924262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.924270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.932237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.932300] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.932309] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.932319] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:17.731 [2024-07-15 20:27:09.932324] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:17.731 [2024-07-15 20:27:09.932330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.940235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.940247] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:13:17.731 [2024-07-15 20:27:09.940259] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.940267] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.940274] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.731 [2024-07-15 20:27:09.940278] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.731 [2024-07-15 20:27:09.940284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.948236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.948251] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.948258] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.948266] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:17.731 [2024-07-15 20:27:09.948270] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.731 [2024-07-15 20:27:09.948276] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.956237] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.956246] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956253] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956262] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956268] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956279] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956284] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:13:17.731 [2024-07-15 20:27:09.956288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:13:17.731 [2024-07-15 20:27:09.956294] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:13:17.731 [2024-07-15 20:27:09.956314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.964238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.964252] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.972236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.972250] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.980235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.980249] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.988238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:17.731 [2024-07-15 20:27:09.988254] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:17.731 [2024-07-15 20:27:09.988259] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:17.731 [2024-07-15 20:27:09.988263] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:13:17.731 [2024-07-15 20:27:09.988267] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:17.731 [2024-07-15 20:27:09.988273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:17.731 [2024-07-15 20:27:09.988282] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:17.731 [2024-07-15 20:27:09.988287] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:17.731 [2024-07-15 20:27:09.988293] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:17.731 [2024-07-15 20:27:09.988300] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:17.732 [2024-07-15 20:27:09.988304] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:17.732 [2024-07-15 20:27:09.988311] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:17.732 [2024-07-15 20:27:09.988318] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:17.732 [2024-07-15 20:27:09.988323] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:17.732 [2024-07-15 20:27:09.988329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:17.732 [2024-07-15 20:27:09.996238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:17.732 [2024-07-15 20:27:09.996253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:17.732 [2024-07-15 20:27:09.996263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:17.732 [2024-07-15 20:27:09.996270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:17.732 ===================================================== 00:13:17.732 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:17.732 ===================================================== 00:13:17.732 Controller Capabilities/Features 00:13:17.732 ================================ 00:13:17.732 Vendor ID: 4e58 00:13:17.732 Subsystem Vendor ID: 4e58 00:13:17.732 Serial Number: SPDK2 00:13:17.732 Model Number: SPDK bdev Controller 00:13:17.732 Firmware Version: 24.09 00:13:17.732 Recommended Arb Burst: 6 00:13:17.732 IEEE OUI Identifier: 8d 6b 50 00:13:17.732 Multi-path I/O 00:13:17.732 May have multiple subsystem ports: Yes 00:13:17.732 May have multiple controllers: Yes 00:13:17.732 Associated with SR-IOV VF: No 00:13:17.732 Max Data Transfer Size: 131072 00:13:17.732 Max Number of Namespaces: 32 00:13:17.732 Max Number of I/O Queues: 127 00:13:17.732 NVMe Specification Version (VS): 1.3 00:13:17.732 NVMe Specification Version (Identify): 1.3 00:13:17.732 Maximum Queue Entries: 256 00:13:17.732 Contiguous Queues Required: Yes 00:13:17.732 Arbitration Mechanisms 
Supported 00:13:17.732 Weighted Round Robin: Not Supported 00:13:17.732 Vendor Specific: Not Supported 00:13:17.732 Reset Timeout: 15000 ms 00:13:17.732 Doorbell Stride: 4 bytes 00:13:17.732 NVM Subsystem Reset: Not Supported 00:13:17.732 Command Sets Supported 00:13:17.732 NVM Command Set: Supported 00:13:17.732 Boot Partition: Not Supported 00:13:17.732 Memory Page Size Minimum: 4096 bytes 00:13:17.732 Memory Page Size Maximum: 4096 bytes 00:13:17.732 Persistent Memory Region: Not Supported 00:13:17.732 Optional Asynchronous Events Supported 00:13:17.732 Namespace Attribute Notices: Supported 00:13:17.732 Firmware Activation Notices: Not Supported 00:13:17.732 ANA Change Notices: Not Supported 00:13:17.732 PLE Aggregate Log Change Notices: Not Supported 00:13:17.732 LBA Status Info Alert Notices: Not Supported 00:13:17.732 EGE Aggregate Log Change Notices: Not Supported 00:13:17.732 Normal NVM Subsystem Shutdown event: Not Supported 00:13:17.732 Zone Descriptor Change Notices: Not Supported 00:13:17.732 Discovery Log Change Notices: Not Supported 00:13:17.732 Controller Attributes 00:13:17.732 128-bit Host Identifier: Supported 00:13:17.732 Non-Operational Permissive Mode: Not Supported 00:13:17.732 NVM Sets: Not Supported 00:13:17.732 Read Recovery Levels: Not Supported 00:13:17.732 Endurance Groups: Not Supported 00:13:17.732 Predictable Latency Mode: Not Supported 00:13:17.732 Traffic Based Keep ALive: Not Supported 00:13:17.732 Namespace Granularity: Not Supported 00:13:17.732 SQ Associations: Not Supported 00:13:17.732 UUID List: Not Supported 00:13:17.732 Multi-Domain Subsystem: Not Supported 00:13:17.732 Fixed Capacity Management: Not Supported 00:13:17.732 Variable Capacity Management: Not Supported 00:13:17.732 Delete Endurance Group: Not Supported 00:13:17.732 Delete NVM Set: Not Supported 00:13:17.732 Extended LBA Formats Supported: Not Supported 00:13:17.732 Flexible Data Placement Supported: Not Supported 00:13:17.732 00:13:17.732 Controller Memory Buffer Support 00:13:17.732 ================================ 00:13:17.732 Supported: No 00:13:17.732 00:13:17.732 Persistent Memory Region Support 00:13:17.732 ================================ 00:13:17.732 Supported: No 00:13:17.732 00:13:17.732 Admin Command Set Attributes 00:13:17.732 ============================ 00:13:17.732 Security Send/Receive: Not Supported 00:13:17.732 Format NVM: Not Supported 00:13:17.732 Firmware Activate/Download: Not Supported 00:13:17.732 Namespace Management: Not Supported 00:13:17.732 Device Self-Test: Not Supported 00:13:17.732 Directives: Not Supported 00:13:17.732 NVMe-MI: Not Supported 00:13:17.732 Virtualization Management: Not Supported 00:13:17.732 Doorbell Buffer Config: Not Supported 00:13:17.732 Get LBA Status Capability: Not Supported 00:13:17.732 Command & Feature Lockdown Capability: Not Supported 00:13:17.732 Abort Command Limit: 4 00:13:17.732 Async Event Request Limit: 4 00:13:17.732 Number of Firmware Slots: N/A 00:13:17.732 Firmware Slot 1 Read-Only: N/A 00:13:17.732 Firmware Activation Without Reset: N/A 00:13:17.732 Multiple Update Detection Support: N/A 00:13:17.732 Firmware Update Granularity: No Information Provided 00:13:17.732 Per-Namespace SMART Log: No 00:13:17.732 Asymmetric Namespace Access Log Page: Not Supported 00:13:17.732 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:17.732 Command Effects Log Page: Supported 00:13:17.732 Get Log Page Extended Data: Supported 00:13:17.732 Telemetry Log Pages: Not Supported 00:13:17.732 Persistent Event Log Pages: Not Supported 
00:13:17.732 Supported Log Pages Log Page: May Support 00:13:17.732 Commands Supported & Effects Log Page: Not Supported 00:13:17.732 Feature Identifiers & Effects Log Page:May Support 00:13:17.732 NVMe-MI Commands & Effects Log Page: May Support 00:13:17.732 Data Area 4 for Telemetry Log: Not Supported 00:13:17.732 Error Log Page Entries Supported: 128 00:13:17.732 Keep Alive: Supported 00:13:17.732 Keep Alive Granularity: 10000 ms 00:13:17.732 00:13:17.732 NVM Command Set Attributes 00:13:17.732 ========================== 00:13:17.732 Submission Queue Entry Size 00:13:17.732 Max: 64 00:13:17.732 Min: 64 00:13:17.732 Completion Queue Entry Size 00:13:17.732 Max: 16 00:13:17.732 Min: 16 00:13:17.732 Number of Namespaces: 32 00:13:17.732 Compare Command: Supported 00:13:17.732 Write Uncorrectable Command: Not Supported 00:13:17.732 Dataset Management Command: Supported 00:13:17.732 Write Zeroes Command: Supported 00:13:17.732 Set Features Save Field: Not Supported 00:13:17.732 Reservations: Not Supported 00:13:17.732 Timestamp: Not Supported 00:13:17.732 Copy: Supported 00:13:17.732 Volatile Write Cache: Present 00:13:17.732 Atomic Write Unit (Normal): 1 00:13:17.732 Atomic Write Unit (PFail): 1 00:13:17.732 Atomic Compare & Write Unit: 1 00:13:17.732 Fused Compare & Write: Supported 00:13:17.732 Scatter-Gather List 00:13:17.732 SGL Command Set: Supported (Dword aligned) 00:13:17.732 SGL Keyed: Not Supported 00:13:17.732 SGL Bit Bucket Descriptor: Not Supported 00:13:17.732 SGL Metadata Pointer: Not Supported 00:13:17.732 Oversized SGL: Not Supported 00:13:17.732 SGL Metadata Address: Not Supported 00:13:17.732 SGL Offset: Not Supported 00:13:17.732 Transport SGL Data Block: Not Supported 00:13:17.732 Replay Protected Memory Block: Not Supported 00:13:17.732 00:13:17.732 Firmware Slot Information 00:13:17.732 ========================= 00:13:17.732 Active slot: 1 00:13:17.732 Slot 1 Firmware Revision: 24.09 00:13:17.732 00:13:17.732 00:13:17.732 Commands Supported and Effects 00:13:17.732 ============================== 00:13:17.732 Admin Commands 00:13:17.732 -------------- 00:13:17.732 Get Log Page (02h): Supported 00:13:17.732 Identify (06h): Supported 00:13:17.732 Abort (08h): Supported 00:13:17.732 Set Features (09h): Supported 00:13:17.732 Get Features (0Ah): Supported 00:13:17.732 Asynchronous Event Request (0Ch): Supported 00:13:17.732 Keep Alive (18h): Supported 00:13:17.732 I/O Commands 00:13:17.732 ------------ 00:13:17.732 Flush (00h): Supported LBA-Change 00:13:17.732 Write (01h): Supported LBA-Change 00:13:17.732 Read (02h): Supported 00:13:17.732 Compare (05h): Supported 00:13:17.732 Write Zeroes (08h): Supported LBA-Change 00:13:17.732 Dataset Management (09h): Supported LBA-Change 00:13:17.732 Copy (19h): Supported LBA-Change 00:13:17.732 00:13:17.732 Error Log 00:13:17.732 ========= 00:13:17.732 00:13:17.732 Arbitration 00:13:17.732 =========== 00:13:17.732 Arbitration Burst: 1 00:13:17.732 00:13:17.732 Power Management 00:13:17.732 ================ 00:13:17.732 Number of Power States: 1 00:13:17.732 Current Power State: Power State #0 00:13:17.732 Power State #0: 00:13:17.732 Max Power: 0.00 W 00:13:17.732 Non-Operational State: Operational 00:13:17.732 Entry Latency: Not Reported 00:13:17.732 Exit Latency: Not Reported 00:13:17.732 Relative Read Throughput: 0 00:13:17.732 Relative Read Latency: 0 00:13:17.732 Relative Write Throughput: 0 00:13:17.732 Relative Write Latency: 0 00:13:17.732 Idle Power: Not Reported 00:13:17.732 Active Power: Not Reported 00:13:17.732 
Non-Operational Permissive Mode: Not Supported 00:13:17.732 00:13:17.732 Health Information 00:13:17.733 ================== 00:13:17.733 Critical Warnings: 00:13:17.733 Available Spare Space: OK 00:13:17.733 Temperature: OK 00:13:17.733 Device Reliability: OK 00:13:17.733 Read Only: No 00:13:17.733 Volatile Memory Backup: OK 00:13:17.733 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:17.733 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:17.733 Available Spare: 0% 00:13:17.733 Available Spare Threshold: 0% [2024-07-15 20:27:09.996373] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:17.733 [2024-07-15 20:27:10.004237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:17.733 [2024-07-15 20:27:10.004275] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:13:17.733 [2024-07-15 20:27:10.004284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.733 [2024-07-15 20:27:10.004291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.733 [2024-07-15 20:27:10.004297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.733 [2024-07-15 20:27:10.004303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:17.733 [2024-07-15 20:27:10.004536] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:17.733 [2024-07-15 20:27:10.004548] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:17.733 [2024-07-15 20:27:10.005548] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:17.733 [2024-07-15 20:27:10.005597] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:13:17.733 [2024-07-15 20:27:10.005605] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:13:17.733 [2024-07-15 20:27:10.006554] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:17.733 [2024-07-15 20:27:10.006566] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:13:17.733 [2024-07-15 20:27:10.006613] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:17.733 [2024-07-15 20:27:10.007988] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:17.733 Life Percentage Used: 0% 00:13:17.733 Data Units Read: 0 00:13:17.733 Data Units Written: 0 00:13:17.733 Host Read Commands: 0 00:13:17.733 Host Write Commands: 0 00:13:17.733 Controller Busy Time: 0 minutes 00:13:17.733 Power Cycles: 0 00:13:17.733 Power On Hours: 0 hours 00:13:17.733 Unsafe Shutdowns: 0 00:13:17.733 Unrecoverable Media
Errors: 0 00:13:17.733 Lifetime Error Log Entries: 0 00:13:17.733 Warning Temperature Time: 0 minutes 00:13:17.733 Critical Temperature Time: 0 minutes 00:13:17.733 00:13:17.733 Number of Queues 00:13:17.733 ================ 00:13:17.733 Number of I/O Submission Queues: 127 00:13:17.733 Number of I/O Completion Queues: 127 00:13:17.733 00:13:17.733 Active Namespaces 00:13:17.733 ================= 00:13:17.733 Namespace ID:1 00:13:17.733 Error Recovery Timeout: Unlimited 00:13:17.733 Command Set Identifier: NVM (00h) 00:13:17.733 Deallocate: Supported 00:13:17.733 Deallocated/Unwritten Error: Not Supported 00:13:17.733 Deallocated Read Value: Unknown 00:13:17.733 Deallocate in Write Zeroes: Not Supported 00:13:17.733 Deallocated Guard Field: 0xFFFF 00:13:17.733 Flush: Supported 00:13:17.733 Reservation: Supported 00:13:17.733 Namespace Sharing Capabilities: Multiple Controllers 00:13:17.733 Size (in LBAs): 131072 (0GiB) 00:13:17.733 Capacity (in LBAs): 131072 (0GiB) 00:13:17.733 Utilization (in LBAs): 131072 (0GiB) 00:13:17.733 NGUID: 448FEA17E58E4641BA2474A639C069D4 00:13:17.733 UUID: 448fea17-e58e-4641-ba24-74a639c069d4 00:13:17.733 Thin Provisioning: Not Supported 00:13:17.733 Per-NS Atomic Units: Yes 00:13:17.733 Atomic Boundary Size (Normal): 0 00:13:17.733 Atomic Boundary Size (PFail): 0 00:13:17.733 Atomic Boundary Offset: 0 00:13:17.733 Maximum Single Source Range Length: 65535 00:13:17.733 Maximum Copy Length: 65535 00:13:17.733 Maximum Source Range Count: 1 00:13:17.733 NGUID/EUI64 Never Reused: No 00:13:17.733 Namespace Write Protected: No 00:13:17.733 Number of LBA Formats: 1 00:13:17.733 Current LBA Format: LBA Format #00 00:13:17.733 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:17.733 00:13:17.733 20:27:10 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:17.733 EAL: No free 2048 kB hugepages reported on node 1 00:13:17.994 [2024-07-15 20:27:10.192606] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:23.281 Initializing NVMe Controllers 00:13:23.281 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:23.281 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:23.281 Initialization complete. Launching workers. 
00:13:23.281 ======================================================== 00:13:23.281 Latency(us) 00:13:23.281 Device Information : IOPS MiB/s Average min max 00:13:23.281 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39973.00 156.14 3204.55 831.28 6821.65 00:13:23.281 ======================================================== 00:13:23.281 Total : 39973.00 156.14 3204.55 831.28 6821.65 00:13:23.281 00:13:23.281 [2024-07-15 20:27:15.301411] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:23.281 20:27:15 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:23.281 EAL: No free 2048 kB hugepages reported on node 1 00:13:23.281 [2024-07-15 20:27:15.477981] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:28.569 Initializing NVMe Controllers 00:13:28.569 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:28.569 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:28.569 Initialization complete. Launching workers. 00:13:28.569 ======================================================== 00:13:28.569 Latency(us) 00:13:28.569 Device Information : IOPS MiB/s Average min max 00:13:28.569 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35785.35 139.79 3576.38 1094.95 7266.26 00:13:28.569 ======================================================== 00:13:28.569 Total : 35785.35 139.79 3576.38 1094.95 7266.26 00:13:28.569 00:13:28.569 [2024-07-15 20:27:20.498052] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:28.569 20:27:20 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:28.569 EAL: No free 2048 kB hugepages reported on node 1 00:13:28.569 [2024-07-15 20:27:20.690207] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:33.851 [2024-07-15 20:27:25.820308] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:33.851 Initializing NVMe Controllers 00:13:33.851 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.851 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:33.851 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:33.851 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:33.851 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:33.851 Initialization complete. Launching workers. 
00:13:33.851 Starting thread on core 2 00:13:33.851 Starting thread on core 3 00:13:33.851 Starting thread on core 1 00:13:33.851 20:27:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:33.851 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.851 [2024-07-15 20:27:26.085701] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:37.141 [2024-07-15 20:27:29.146670] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:37.141 Initializing NVMe Controllers 00:13:37.141 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.141 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.141 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:37.141 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:37.141 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:37.141 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:37.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:37.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:37.141 Initialization complete. Launching workers. 00:13:37.141 Starting thread on core 1 with urgent priority queue 00:13:37.141 Starting thread on core 2 with urgent priority queue 00:13:37.141 Starting thread on core 3 with urgent priority queue 00:13:37.141 Starting thread on core 0 with urgent priority queue 00:13:37.141 SPDK bdev Controller (SPDK2 ) core 0: 10795.00 IO/s 9.26 secs/100000 ios 00:13:37.141 SPDK bdev Controller (SPDK2 ) core 1: 8061.67 IO/s 12.40 secs/100000 ios 00:13:37.141 SPDK bdev Controller (SPDK2 ) core 2: 8047.33 IO/s 12.43 secs/100000 ios 00:13:37.141 SPDK bdev Controller (SPDK2 ) core 3: 8077.67 IO/s 12.38 secs/100000 ios 00:13:37.141 ======================================================== 00:13:37.141 00:13:37.141 20:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:37.141 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.141 [2024-07-15 20:27:29.416692] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:37.141 Initializing NVMe Controllers 00:13:37.141 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.141 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:37.141 Namespace ID: 1 size: 0GB 00:13:37.141 Initialization complete. 00:13:37.141 INFO: using host memory buffer for IO 00:13:37.141 Hello world! 
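Note on the invocations above: spdk_nvme_perf and the example binaries all address the target through the same -r transport ID string. A minimal re-runnable sketch of the first read pass, with paths relative to an SPDK build tree standing in for the absolute workspace paths used by the job:
# 128-deep queue of 4096-byte reads for 5 seconds on core mask 0x2, with 256 MB of
# hugepage memory (-s) allocated as a single memory segment (-g):
./build/bin/spdk_nvme_perf \
  -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' \
  -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2
# The later passes reuse the same -r string: -w write for the write run, then the
# reconnect, arbitration and hello_world binaries under build/examples/.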
00:13:37.141 [2024-07-15 20:27:29.425746] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:37.141 20:27:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:37.401 EAL: No free 2048 kB hugepages reported on node 1 00:13:37.401 [2024-07-15 20:27:29.698173] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:38.779 Initializing NVMe Controllers 00:13:38.779 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.779 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:38.779 Initialization complete. Launching workers. 00:13:38.779 submit (in ns) avg, min, max = 8278.6, 3932.5, 3999915.8 00:13:38.779 complete (in ns) avg, min, max = 17856.9, 2382.5, 4994275.0 00:13:38.779 00:13:38.779 Submit histogram 00:13:38.779 ================ 00:13:38.779 Range in us Cumulative Count 00:13:38.779 3.920 - 3.947: 0.2452% ( 47) 00:13:38.779 3.947 - 3.973: 2.9581% ( 520) 00:13:38.779 3.973 - 4.000: 8.6342% ( 1088) 00:13:38.779 4.000 - 4.027: 17.6336% ( 1725) 00:13:38.779 4.027 - 4.053: 28.8032% ( 2141) 00:13:38.779 4.053 - 4.080: 40.9380% ( 2326) 00:13:38.779 4.080 - 4.107: 55.3579% ( 2764) 00:13:38.779 4.107 - 4.133: 71.2594% ( 3048) 00:13:38.779 4.133 - 4.160: 84.0724% ( 2456) 00:13:38.779 4.160 - 4.187: 92.8422% ( 1681) 00:13:38.779 4.187 - 4.213: 96.7654% ( 752) 00:13:38.779 4.213 - 4.240: 98.4558% ( 324) 00:13:38.779 4.240 - 4.267: 98.9722% ( 99) 00:13:38.779 4.267 - 4.293: 99.1340% ( 31) 00:13:38.779 4.293 - 4.320: 99.1601% ( 5) 00:13:38.779 4.320 - 4.347: 99.1705% ( 2) 00:13:38.780 4.347 - 4.373: 99.1757% ( 1) 00:13:38.780 4.427 - 4.453: 99.1914% ( 3) 00:13:38.780 4.453 - 4.480: 99.2122% ( 4) 00:13:38.780 4.480 - 4.507: 99.2331% ( 4) 00:13:38.780 4.507 - 4.533: 99.2383% ( 1) 00:13:38.780 4.533 - 4.560: 99.2435% ( 1) 00:13:38.780 4.560 - 4.587: 99.2487% ( 1) 00:13:38.780 4.587 - 4.613: 99.2592% ( 2) 00:13:38.780 4.613 - 4.640: 99.2644% ( 1) 00:13:38.780 4.640 - 4.667: 99.2801% ( 3) 00:13:38.780 4.667 - 4.693: 99.2905% ( 2) 00:13:38.780 4.747 - 4.773: 99.3009% ( 2) 00:13:38.780 4.773 - 4.800: 99.3114% ( 2) 00:13:38.780 4.800 - 4.827: 99.3218% ( 2) 00:13:38.780 4.827 - 4.853: 99.3322% ( 2) 00:13:38.780 4.853 - 4.880: 99.3374% ( 1) 00:13:38.780 4.880 - 4.907: 99.3427% ( 1) 00:13:38.780 5.067 - 5.093: 99.3531% ( 2) 00:13:38.780 5.093 - 5.120: 99.3583% ( 1) 00:13:38.780 5.120 - 5.147: 99.3687% ( 2) 00:13:38.780 5.147 - 5.173: 99.3792% ( 2) 00:13:38.780 5.173 - 5.200: 99.3844% ( 1) 00:13:38.780 5.227 - 5.253: 99.3948% ( 2) 00:13:38.780 5.333 - 5.360: 99.4053% ( 2) 00:13:38.780 5.360 - 5.387: 99.4209% ( 3) 00:13:38.780 5.387 - 5.413: 99.4261% ( 1) 00:13:38.780 5.413 - 5.440: 99.4366% ( 2) 00:13:38.780 5.467 - 5.493: 99.4470% ( 2) 00:13:38.780 5.493 - 5.520: 99.4574% ( 2) 00:13:38.780 5.547 - 5.573: 99.4679% ( 2) 00:13:38.780 5.600 - 5.627: 99.4731% ( 1) 00:13:38.780 5.627 - 5.653: 99.4783% ( 1) 00:13:38.780 5.680 - 5.707: 99.4835% ( 1) 00:13:38.780 5.733 - 5.760: 99.4887% ( 1) 00:13:38.780 5.787 - 5.813: 99.5044% ( 3) 00:13:38.780 5.813 - 5.840: 99.5096% ( 1) 00:13:38.780 5.840 - 5.867: 99.5148% ( 1) 00:13:38.780 5.867 - 5.893: 99.5200% ( 1) 00:13:38.780 5.920 - 5.947: 99.5253% ( 1) 00:13:38.780 5.947 - 5.973: 99.5305% ( 1) 00:13:38.780 6.000 - 6.027: 99.5409% ( 2) 
00:13:38.780 6.027 - 6.053: 99.5461% ( 1) 00:13:38.780 6.053 - 6.080: 99.5513% ( 1) 00:13:38.780 6.107 - 6.133: 99.5670% ( 3) 00:13:38.780 6.133 - 6.160: 99.5722% ( 1) 00:13:38.780 6.160 - 6.187: 99.5826% ( 2) 00:13:38.780 6.187 - 6.213: 99.5931% ( 2) 00:13:38.780 6.240 - 6.267: 99.6035% ( 2) 00:13:38.780 6.267 - 6.293: 99.6087% ( 1) 00:13:38.780 6.293 - 6.320: 99.6139% ( 1) 00:13:38.780 6.320 - 6.347: 99.6296% ( 3) 00:13:38.780 6.400 - 6.427: 99.6348% ( 1) 00:13:38.780 6.427 - 6.453: 99.6452% ( 2) 00:13:38.780 6.480 - 6.507: 99.6505% ( 1) 00:13:38.780 6.507 - 6.533: 99.6661% ( 3) 00:13:38.780 6.613 - 6.640: 99.6713% ( 1) 00:13:38.780 6.667 - 6.693: 99.6818% ( 2) 00:13:38.780 6.693 - 6.720: 99.6870% ( 1) 00:13:38.780 6.720 - 6.747: 99.6922% ( 1) 00:13:38.780 6.747 - 6.773: 99.6974% ( 1) 00:13:38.780 6.827 - 6.880: 99.7026% ( 1) 00:13:38.780 6.880 - 6.933: 99.7183% ( 3) 00:13:38.780 6.933 - 6.987: 99.7287% ( 2) 00:13:38.780 6.987 - 7.040: 99.7444% ( 3) 00:13:38.780 7.040 - 7.093: 99.7496% ( 1) 00:13:38.780 7.147 - 7.200: 99.7652% ( 3) 00:13:38.780 7.200 - 7.253: 99.7757% ( 2) 00:13:38.780 7.253 - 7.307: 99.7809% ( 1) 00:13:38.780 7.360 - 7.413: 99.7913% ( 2) 00:13:38.780 7.413 - 7.467: 99.8070% ( 3) 00:13:38.780 7.467 - 7.520: 99.8122% ( 1) 00:13:38.780 7.520 - 7.573: 99.8174% ( 1) 00:13:38.780 7.680 - 7.733: 99.8226% ( 1) 00:13:38.780 7.733 - 7.787: 99.8278% ( 1) 00:13:38.780 7.787 - 7.840: 99.8331% ( 1) 00:13:38.780 7.947 - 8.000: 99.8383% ( 1) 00:13:38.780 8.160 - 8.213: 99.8487% ( 2) 00:13:38.780 8.427 - 8.480: 99.8539% ( 1) 00:13:38.780 9.013 - 9.067: 99.8591% ( 1) 00:13:38.780 9.067 - 9.120: 99.8644% ( 1) 00:13:38.780 10.453 - 10.507: 99.8696% ( 1) 00:13:38.780 12.587 - 12.640: 99.8748% ( 1) 00:13:38.780 17.067 - 17.173: 99.8800% ( 1) 00:13:38.780 18.347 - 18.453: 99.8852% ( 1) 00:13:38.780 39.040 - 39.253: 99.8904% ( 1) 00:13:38.780 39.253 - 39.467: 99.8957% ( 1) 00:13:38.780 3986.773 - 4014.080: 100.0000% ( 20) 00:13:38.780 00:13:38.780 Complete histogram 00:13:38.780 ================== 00:13:38.780 Range in us Cumulative Count 00:13:38.780 2.373 - 2.387: 0.0052% ( 1) 00:13:38.780 2.387 - 2.400: 0.4330% ( 82) 00:13:38.780 2.400 - 2.413: 1.0278% ( 114) 00:13:38.780 2.413 - 2.427: 1.1738% ( 28) 00:13:38.780 2.427 - 2.440: 41.3658% ( 7704) 00:13:38.780 2.440 - 2.453: 55.3892% ( 2688) 00:13:38.780 2.453 - 2.467: 69.5586% ( 2716) 00:13:38.780 2.467 - 2.480: 78.3545% ( 1686) 00:13:38.780 2.480 - 2.493: 81.4535% ( 594) 00:13:38.780 2.493 - 2.507: 83.8585% ( 461) 00:13:38.780 2.507 - 2.520: 89.1121% ( 1007) 00:13:38.780 2.520 - 2.533: 93.7187% ( 883) 00:13:38.780 2.533 - 2.547: 96.3116% ( 497) 00:13:38.780 2.547 - 2.560: 98.0854% ( 340) 00:13:38.780 2.560 - 2.573: 98.7740% ( 132) 00:13:38.780 2.573 - 2.587: 98.9305% ( 30) 00:13:38.780 2.587 - 2.600: 98.9618% ( 6) 00:13:38.780 2.600 - 2.613: 98.9827% ( 4) 00:13:38.780 2.613 - 2.627: 98.9879% ( 1) 00:13:38.780 2.627 - 2.640: 98.9983% ( 2) 00:13:38.780 2.640 - 2.653: 99.0140% ( 3) 00:13:38.780 2.653 - 2.667: 99.0192% ( 1) 00:13:38.780 2.667 - 2.680: 99.0244% ( 1) 00:13:38.780 2.680 - 2.693: 99.0296% ( 1) 00:13:38.780 2.707 - 2.720: 99.0348% ( 1) 00:13:38.780 2.720 - 2.733: 99.0401% ( 1) 00:13:38.780 2.760 - 2.773: 99.0453% ( 1) 00:13:38.780 2.787 - 2.800: 99.0609% ( 3) 00:13:38.780 2.800 - 2.813: 99.0662% ( 1) 00:13:38.780 2.813 - 2.827: 99.0766% ( 2) 00:13:38.780 2.827 - 2.840: 99.0922% ( 3) 00:13:38.780 2.840 - 2.853: 99.0975% ( 1) 00:13:38.780 2.947 - 2.960: 99.1027% ( 1) 00:13:38.780 2.973 - 2.987: 99.1079% ( 1) 00:13:38.780 3.040 - 3.053: 
99.1131% ( 1) 00:13:38.780 3.053 - 3.067: 99.1183% ( 1) 00:13:38.780 3.067 - 3.080: 99.1235% ( 1) 00:13:38.780 3.080 - 3.093: 99.1288% ( 1) 00:13:38.780 3.200 - 3.213: 99.1340% ( 1) 00:13:38.780 3.227 - 3.240: 99.1392% ( 1) 00:13:38.780 3.293 - 3.307: 99.1444% ( 1) 00:13:38.780 3.320 - 3.333: 99.1496% ( 1) 00:13:38.780 3.387 - 3.400: 99.1548% ( 1) 00:13:38.780 3.400 - 3.413: 99.1653% ( 2) 00:13:38.780 3.413 - 3.440: 99.1705% ( 1) 00:13:38.780 3.440 - 3.467: 99.1861% ( 3) 00:13:38.780 3.467 - 3.493: 99.1914% ( 1) 00:13:38.780 3.493 - 3.520: 99.1966% ( 1) 00:13:38.780 3.520 - 3.547: 99.2018% ( 1) 00:13:38.780 3.547 - 3.573: 99.2070% ( 1) 00:13:38.780 3.600 - 3.627: 99.2122% ( 1) 00:13:38.780 3.653 - 3.680: 99.2174% ( 1) 00:13:38.780 3.760 - 3.787: 99.2279% ( 2) 00:13:38.780 3.787 - 3.813: 99.2383% ( 2) 00:13:38.780 3.813 - 3.840: 99.2435% ( 1) 00:13:38.780 3.893 - 3.920: 99.2487% ( 1) 00:13:38.780 4.000 - 4.027: 99.2540% ( 1) 00:13:38.780 4.053 - 4.080: 99.2592% ( 1) 00:13:38.780 4.240 - 4.267: 99.2644% ( 1) 00:13:38.780 4.267 - 4.293: 99.2696% ( 1) 00:13:38.780 4.293 - 4.320: 99.2748% ( 1) 00:13:38.781 4.320 - 4.347: 99.2801% ( 1) [2024-07-15 20:27:30.801006] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:38.781 4.347 - 4.373: 99.2853% ( 1) 00:13:38.781 4.373 - 4.400: 99.2905% ( 1) 00:13:38.781 4.400 - 4.427: 99.2957% ( 1) 00:13:38.781 4.480 - 4.507: 99.3009% ( 1) 00:13:38.781 4.533 - 4.560: 99.3061% ( 1) 00:13:38.781 4.693 - 4.720: 99.3166% ( 2) 00:13:38.781 4.747 - 4.773: 99.3218% ( 1) 00:13:38.781 4.773 - 4.800: 99.3270% ( 1) 00:13:38.781 4.827 - 4.853: 99.3374% ( 2) 00:13:38.781 4.907 - 4.933: 99.3427% ( 1) 00:13:38.781 5.013 - 5.040: 99.3479% ( 1) 00:13:38.781 5.067 - 5.093: 99.3531% ( 1) 00:13:38.781 5.093 - 5.120: 99.3583% ( 1) 00:13:38.781 5.120 - 5.147: 99.3687% ( 2) 00:13:38.781 5.280 - 5.307: 99.3740% ( 1) 00:13:38.781 5.333 - 5.360: 99.3792% ( 1) 00:13:38.781 5.467 - 5.493: 99.3844% ( 1) 00:13:38.781 5.573 - 5.600: 99.4000% ( 3) 00:13:38.781 5.600 - 5.627: 99.4053% ( 1) 00:13:38.781 5.627 - 5.653: 99.4105% ( 1) 00:13:38.781 5.707 - 5.733: 99.4157% ( 1) 00:13:38.781 5.733 - 5.760: 99.4313% ( 3) 00:13:38.781 5.787 - 5.813: 99.4366% ( 1) 00:13:38.781 5.813 - 5.840: 99.4418% ( 1) 00:13:38.781 5.840 - 5.867: 99.4470% ( 1) 00:13:38.781 5.867 - 5.893: 99.4574% ( 2) 00:13:38.781 5.893 - 5.920: 99.4626% ( 1) 00:13:38.781 6.000 - 6.027: 99.4679% ( 1) 00:13:38.781 6.107 - 6.133: 99.4731% ( 1) 00:13:38.781 6.240 - 6.267: 99.4783% ( 1) 00:13:38.781 6.293 - 6.320: 99.4835% ( 1) 00:13:38.781 6.693 - 6.720: 99.4939% ( 2) 00:13:38.781 6.880 - 6.933: 99.4992% ( 1) 00:13:38.781 8.747 - 8.800: 99.5044% ( 1) 00:13:38.781 9.173 - 9.227: 99.5096% ( 1) 00:13:38.781 9.920 - 9.973: 99.5148% ( 1) 00:13:38.781 10.347 - 10.400: 99.5200% ( 1) 00:13:38.781 10.880 - 10.933: 99.5305% ( 2) 00:13:38.781 11.573 - 11.627: 99.5357% ( 1) 00:13:38.781 12.800 - 12.853: 99.5409% ( 1) 00:13:38.781 13.013 - 13.067: 99.5461% ( 1) 00:13:38.781 13.653 - 13.760: 99.5513% ( 1) 00:13:38.781 14.187 - 14.293: 99.5618% ( 2) 00:13:38.781 16.533 - 16.640: 99.5670% ( 1) 00:13:38.781 17.387 - 17.493: 99.5722% ( 1) 00:13:38.781 21.333 - 21.440: 99.5774% ( 1) 00:13:38.781 22.507 - 22.613: 99.5826% ( 1) 00:13:38.781 23.040 - 23.147: 99.5879% ( 1) 00:13:38.781 29.227 - 29.440: 99.5931% ( 1) 00:13:38.781 31.147 - 31.360: 99.5983% ( 1) 00:13:38.781 35.627 - 35.840: 99.6035% ( 1) 00:13:38.781 40.320 - 40.533: 99.6087% ( 1) 00:13:38.781 44.160 - 44.373: 99.6139% ( 1)
00:13:38.781 2648.747 - 2662.400: 99.6192% ( 1) 00:13:38.781 3304.107 - 3317.760: 99.6244% ( 1) 00:13:38.781 3986.773 - 4014.080: 99.9948% ( 71) 00:13:38.781 4969.813 - 4997.120: 100.0000% ( 1) 00:13:38.781 00:13:38.781 20:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:38.781 20:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:38.781 20:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:38.781 20:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:38.781 20:27:30 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:38.781 [ 00:13:38.781 { 00:13:38.781 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:38.781 "subtype": "Discovery", 00:13:38.781 "listen_addresses": [], 00:13:38.781 "allow_any_host": true, 00:13:38.781 "hosts": [] 00:13:38.781 }, 00:13:38.781 { 00:13:38.781 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:38.781 "subtype": "NVMe", 00:13:38.781 "listen_addresses": [ 00:13:38.781 { 00:13:38.781 "trtype": "VFIOUSER", 00:13:38.781 "adrfam": "IPv4", 00:13:38.781 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:38.781 "trsvcid": "0" 00:13:38.781 } 00:13:38.781 ], 00:13:38.781 "allow_any_host": true, 00:13:38.781 "hosts": [], 00:13:38.781 "serial_number": "SPDK1", 00:13:38.781 "model_number": "SPDK bdev Controller", 00:13:38.781 "max_namespaces": 32, 00:13:38.781 "min_cntlid": 1, 00:13:38.781 "max_cntlid": 65519, 00:13:38.781 "namespaces": [ 00:13:38.781 { 00:13:38.781 "nsid": 1, 00:13:38.781 "bdev_name": "Malloc1", 00:13:38.781 "name": "Malloc1", 00:13:38.781 "nguid": "88A765369D2A4256B8D84771710F8FA0", 00:13:38.781 "uuid": "88a76536-9d2a-4256-b8d8-4771710f8fa0" 00:13:38.781 }, 00:13:38.781 { 00:13:38.781 "nsid": 2, 00:13:38.781 "bdev_name": "Malloc3", 00:13:38.781 "name": "Malloc3", 00:13:38.781 "nguid": "6FDD27A6CE804A31918F4F5800F866BB", 00:13:38.781 "uuid": "6fdd27a6-ce80-4a31-918f-4f5800f866bb" 00:13:38.781 } 00:13:38.781 ] 00:13:38.781 }, 00:13:38.781 { 00:13:38.781 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:38.781 "subtype": "NVMe", 00:13:38.781 "listen_addresses": [ 00:13:38.781 { 00:13:38.781 "trtype": "VFIOUSER", 00:13:38.781 "adrfam": "IPv4", 00:13:38.781 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:38.781 "trsvcid": "0" 00:13:38.781 } 00:13:38.781 ], 00:13:38.781 "allow_any_host": true, 00:13:38.781 "hosts": [], 00:13:38.781 "serial_number": "SPDK2", 00:13:38.781 "model_number": "SPDK bdev Controller", 00:13:38.781 "max_namespaces": 32, 00:13:38.781 "min_cntlid": 1, 00:13:38.781 "max_cntlid": 65519, 00:13:38.781 "namespaces": [ 00:13:38.781 { 00:13:38.781 "nsid": 1, 00:13:38.781 "bdev_name": "Malloc2", 00:13:38.781 "name": "Malloc2", 00:13:38.781 "nguid": "448FEA17E58E4641BA2474A639C069D4", 00:13:38.781 "uuid": "448fea17-e58e-4641-ba24-74a639c069d4" 00:13:38.781 } 00:13:38.781 ] 00:13:38.781 } 00:13:38.781 ] 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1247490 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:38.781 20:27:31 
nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:38.781 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:38.781 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.042 Malloc4 00:13:39.042 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:39.042 [2024-07-15 20:27:31.204883] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:39.042 [2024-07-15 20:27:31.351835] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:39.042 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:39.042 Asynchronous Event Request test 00:13:39.042 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.042 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:39.042 Registering asynchronous event callbacks... 00:13:39.042 Starting namespace attribute notice tests for all controllers... 00:13:39.042 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:39.042 aer_cb - Changed Namespace 00:13:39.042 Cleaning up... 
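The AER exercise just traced reduces to a short RPC sequence; a sketch assuming an SPDK source tree as the working directory and the same socket path as above:
# Start the listener in the background; -t names the file the tool touches once it
# is armed, which the script polls for (waitforfile) before changing the target.
./test/nvme/aer/aer -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file &
# Hot-add a second namespace to fire the Namespace Attribute Changed notice
# (log page 4, aen_event_type 0x02) reported above as 'aer_cb - Changed Namespace'.
./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
# The nvmf_get_subsystems dump that follows shows Malloc4 attached as nsid 2.
./scripts/rpc.py nvmf_get_subsystems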
00:13:39.303 [ 00:13:39.303 { 00:13:39.303 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:39.303 "subtype": "Discovery", 00:13:39.303 "listen_addresses": [], 00:13:39.303 "allow_any_host": true, 00:13:39.303 "hosts": [] 00:13:39.303 }, 00:13:39.303 { 00:13:39.303 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:39.303 "subtype": "NVMe", 00:13:39.303 "listen_addresses": [ 00:13:39.303 { 00:13:39.303 "trtype": "VFIOUSER", 00:13:39.303 "adrfam": "IPv4", 00:13:39.303 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:39.303 "trsvcid": "0" 00:13:39.303 } 00:13:39.303 ], 00:13:39.303 "allow_any_host": true, 00:13:39.303 "hosts": [], 00:13:39.303 "serial_number": "SPDK1", 00:13:39.303 "model_number": "SPDK bdev Controller", 00:13:39.303 "max_namespaces": 32, 00:13:39.303 "min_cntlid": 1, 00:13:39.303 "max_cntlid": 65519, 00:13:39.303 "namespaces": [ 00:13:39.303 { 00:13:39.303 "nsid": 1, 00:13:39.303 "bdev_name": "Malloc1", 00:13:39.303 "name": "Malloc1", 00:13:39.303 "nguid": "88A765369D2A4256B8D84771710F8FA0", 00:13:39.303 "uuid": "88a76536-9d2a-4256-b8d8-4771710f8fa0" 00:13:39.303 }, 00:13:39.303 { 00:13:39.303 "nsid": 2, 00:13:39.303 "bdev_name": "Malloc3", 00:13:39.303 "name": "Malloc3", 00:13:39.303 "nguid": "6FDD27A6CE804A31918F4F5800F866BB", 00:13:39.303 "uuid": "6fdd27a6-ce80-4a31-918f-4f5800f866bb" 00:13:39.303 } 00:13:39.303 ] 00:13:39.303 }, 00:13:39.303 { 00:13:39.303 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:39.303 "subtype": "NVMe", 00:13:39.303 "listen_addresses": [ 00:13:39.303 { 00:13:39.303 "trtype": "VFIOUSER", 00:13:39.303 "adrfam": "IPv4", 00:13:39.303 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:39.303 "trsvcid": "0" 00:13:39.303 } 00:13:39.303 ], 00:13:39.303 "allow_any_host": true, 00:13:39.303 "hosts": [], 00:13:39.303 "serial_number": "SPDK2", 00:13:39.303 "model_number": "SPDK bdev Controller", 00:13:39.303 "max_namespaces": 32, 00:13:39.303 "min_cntlid": 1, 00:13:39.303 "max_cntlid": 65519, 00:13:39.303 "namespaces": [ 00:13:39.303 { 00:13:39.303 "nsid": 1, 00:13:39.303 "bdev_name": "Malloc2", 00:13:39.303 "name": "Malloc2", 00:13:39.303 "nguid": "448FEA17E58E4641BA2474A639C069D4", 00:13:39.303 "uuid": "448fea17-e58e-4641-ba24-74a639c069d4" 00:13:39.303 }, 00:13:39.303 { 00:13:39.303 "nsid": 2, 00:13:39.303 "bdev_name": "Malloc4", 00:13:39.303 "name": "Malloc4", 00:13:39.303 "nguid": "22949863E09A40B5B696B025FE63F1E0", 00:13:39.303 "uuid": "22949863-e09a-40b5-b696-b025fe63f1e0" 00:13:39.303 } 00:13:39.303 ] 00:13:39.303 } 00:13:39.303 ] 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1247490 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1237850 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1237850 ']' 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1237850 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1237850 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1237850' 00:13:39.303 killing process with pid 1237850 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1237850 00:13:39.303 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1237850 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1247511 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1247511' 00:13:39.565 Process pid: 1247511 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1247511 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 1247511 ']' 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:39.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.565 20:27:31 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:39.565 [2024-07-15 20:27:31.829986] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:39.565 [2024-07-15 20:27:31.830904] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:13:39.565 [2024-07-15 20:27:31.830943] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:39.565 EAL: No free 2048 kB hugepages reported on node 1 00:13:39.565 [2024-07-15 20:27:31.897766] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:39.827 [2024-07-15 20:27:31.961876] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:39.827 [2024-07-15 20:27:31.961917] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:13:39.827 [2024-07-15 20:27:31.961924] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:39.827 [2024-07-15 20:27:31.961931] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:39.827 [2024-07-15 20:27:31.961937] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:39.827 [2024-07-15 20:27:31.962011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:39.827 [2024-07-15 20:27:31.962141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:39.827 [2024-07-15 20:27:31.962287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:39.827 [2024-07-15 20:27:31.962287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:39.827 [2024-07-15 20:27:32.031539] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:39.827 [2024-07-15 20:27:32.031575] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:39.827 [2024-07-15 20:27:32.032582] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:39.827 [2024-07-15 20:27:32.032938] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:39.827 [2024-07-15 20:27:32.033039] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:40.398 20:27:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.398 20:27:32 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:13:40.398 20:27:32 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:41.402 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:41.662 Malloc1 00:13:41.662 20:27:33 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:41.922 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:41.922 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:42.182 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:13:42.182 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:42.182 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:42.441 Malloc2 00:13:42.441 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:42.441 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:42.702 20:27:34 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1247511 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 1247511 ']' 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 1247511 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1247511 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1247511' 00:13:42.963 killing process with pid 1247511 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 1247511 00:13:42.963 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 1247511 00:13:43.225 20:27:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:43.225 20:27:35 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:43.225 00:13:43.225 real 0m50.589s 00:13:43.225 user 3m20.455s 00:13:43.225 sys 0m3.073s 00:13:43.225 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:43.226 ************************************ 00:13:43.226 END TEST nvmf_vfio_user 00:13:43.226 ************************************ 00:13:43.226 20:27:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:43.226 20:27:35 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.226 20:27:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:43.226 20:27:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:43.226 20:27:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:43.226 ************************************ 00:13:43.226 START 
TEST nvmf_vfio_user_nvme_compliance 00:13:43.226 ************************************ 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:43.226 * Looking for test storage... 00:13:43.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1248318 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1248318' 00:13:43.226 Process pid: 1248318 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1248318 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 1248318 ']' 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:43.226 20:27:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:43.488 [2024-07-15 20:27:35.627462] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:13:43.489 [2024-07-15 20:27:35.627543] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.489 EAL: No free 2048 kB hugepages reported on node 1 00:13:43.489 [2024-07-15 20:27:35.705414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:43.489 [2024-07-15 20:27:35.779626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.489 [2024-07-15 20:27:35.779667] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.489 [2024-07-15 20:27:35.779675] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.489 [2024-07-15 20:27:35.779682] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.489 [2024-07-15 20:27:35.779688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:43.489 [2024-07-15 20:27:35.779890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.489 [2024-07-15 20:27:35.779903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.489 [2024-07-15 20:27:35.779906] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:44.060 20:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:44.060 20:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:13:44.060 20:27:36 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 malloc0 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:45.445 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:45.446 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:45.446 20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:45.446 
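The compliance target bring-up traced above maps one-to-one onto plain JSON-RPC calls (rpc_cmd in the trace is the test harness wrapper around the RPC socket); a minimal sketch from an SPDK source tree:
./scripts/rpc.py nvmf_create_transport -t VFIOUSER
mkdir -p /var/run/vfio-user
# 64 MB bdev with 512-byte blocks, per MALLOC_BDEV_SIZE/MALLOC_BLOCK_SIZE above.
./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
# -a allows any host, -s sets the serial number, -m 32 caps the namespace count.
./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
# The compliance binary then connects using the socket directory as its traddr:
./test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0'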
20:27:37 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:45.446 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.446 00:13:45.446 00:13:45.446 CUnit - A unit testing framework for C - Version 2.1-3 00:13:45.446 http://cunit.sourceforge.net/ 00:13:45.446 00:13:45.446 00:13:45.446 Suite: nvme_compliance 00:13:45.446 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-15 20:27:37.667645] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.446 [2024-07-15 20:27:37.668987] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:45.446 [2024-07-15 20:27:37.668997] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:45.446 [2024-07-15 20:27:37.669002] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:45.446 [2024-07-15 20:27:37.670665] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.446 passed 00:13:45.446 Test: admin_identify_ctrlr_verify_fused ...[2024-07-15 20:27:37.766227] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.446 [2024-07-15 20:27:37.769244] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.446 passed 00:13:45.705 Test: admin_identify_ns ...[2024-07-15 20:27:37.865415] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.705 [2024-07-15 20:27:37.925238] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:45.705 [2024-07-15 20:27:37.933240] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:45.705 [2024-07-15 20:27:37.954342] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.705 passed 00:13:45.705 Test: admin_get_features_mandatory_features ...[2024-07-15 20:27:38.047321] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.705 [2024-07-15 20:27:38.051337] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.963 passed 00:13:45.963 Test: admin_get_features_optional_features ...[2024-07-15 20:27:38.143863] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:45.963 [2024-07-15 20:27:38.146882] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:45.963 passed 00:13:45.963 Test: admin_set_features_number_of_queues ...[2024-07-15 20:27:38.239991] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.223 [2024-07-15 20:27:38.344329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.223 passed 00:13:46.223 Test: admin_get_log_page_mandatory_logs ...[2024-07-15 20:27:38.437953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.223 [2024-07-15 20:27:38.440980] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.223 passed 00:13:46.223 Test: admin_get_log_page_with_lpo ...[2024-07-15 20:27:38.534471] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.223 [2024-07-15 20:27:38.602242] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:46.483 [2024-07-15 20:27:38.615297] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.483 passed 00:13:46.483 Test: fabric_property_get ...[2024-07-15 20:27:38.709327] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.483 [2024-07-15 20:27:38.710570] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:46.484 [2024-07-15 20:27:38.712346] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.484 passed 00:13:46.484 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-15 20:27:38.806866] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.484 [2024-07-15 20:27:38.808119] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:46.484 [2024-07-15 20:27:38.809888] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.484 passed 00:13:46.744 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-15 20:27:38.902028] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.744 [2024-07-15 20:27:38.985242] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.744 [2024-07-15 20:27:39.001237] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:46.744 [2024-07-15 20:27:39.006330] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:46.744 passed 00:13:46.744 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-15 20:27:39.099953] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:46.744 [2024-07-15 20:27:39.101193] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:46.744 [2024-07-15 20:27:39.102973] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.005 passed 00:13:47.005 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-15 20:27:39.196488] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.005 [2024-07-15 20:27:39.272238] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:47.005 [2024-07-15 20:27:39.296246] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:47.005 [2024-07-15 20:27:39.301329] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.005 passed 00:13:47.266 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-15 20:27:39.393948] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.266 [2024-07-15 20:27:39.395187] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:47.266 [2024-07-15 20:27:39.395206] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:47.266 [2024-07-15 20:27:39.396969] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:47.266 passed 00:13:47.266 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-15 20:27:39.488486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:47.266 [2024-07-15 20:27:39.580249] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1
00:13:47.266 [2024-07-15 20:27:39.588239] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257
00:13:47.266 [2024-07-15 20:27:39.596239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0
00:13:47.266 [2024-07-15 20:27:39.604239] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128
00:13:47.266 [2024-07-15 20:27:39.633335] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:47.528 passed
00:13:47.528 Test: admin_create_io_sq_verify_pc ...[2024-07-15 20:27:39.727925] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:47.528 [2024-07-15 20:27:39.744243] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported
00:13:47.528 [2024-07-15 20:27:39.762081] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:47.528 passed
00:13:47.528 Test: admin_create_io_qp_max_qps ...[2024-07-15 20:27:39.855616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:48.911 [2024-07-15 20:27:40.953240] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs
00:13:49.172 [2024-07-15 20:27:41.332126] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:49.172 passed
00:13:49.172 Test: admin_create_io_sq_shared_cq ...[2024-07-15 20:27:41.424485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller
00:13:49.432 [2024-07-15 20:27:41.560237] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first
00:13:49.432 [2024-07-15 20:27:41.597286] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller
00:13:49.432 passed
00:13:49.432
00:13:49.432 Run Summary:    Type  Total    Ran Passed Failed Inactive
00:13:49.432               suites      1      1    n/a      0        0
00:13:49.432                tests     18     18     18      0        0
00:13:49.432              asserts    360    360    360      0      n/a
00:13:49.432
00:13:49.432 Elapsed time = 1.645 seconds
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1248318
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 1248318 ']'
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 1248318
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1248318
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1248318'
00:13:49.432 killing process with pid 1248318
00:13:49.432 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 1248318
00:13:49.693 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 1248318
00:13:49.693 20:27:41
nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:49.693 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:49.693 00:13:49.693 real 0m6.408s 00:13:49.693 user 0m18.305s 00:13:49.693 sys 0m0.482s 00:13:49.693 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:49.693 20:27:41 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:49.693 ************************************ 00:13:49.693 END TEST nvmf_vfio_user_nvme_compliance 00:13:49.693 ************************************ 00:13:49.693 20:27:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:49.693 20:27:41 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.693 20:27:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:49.693 20:27:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:49.693 20:27:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:49.693 ************************************ 00:13:49.693 START TEST nvmf_vfio_user_fuzz 00:13:49.693 ************************************ 00:13:49.693 20:27:41 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:49.693 * Looking for test storage... 00:13:49.693 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:49.693 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:49.694 20:27:42 
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1249658 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1249658' 00:13:49.694 Process pid: 1249658 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1249658 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 1249658 ']' 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:49.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
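The repeated 'Waiting for process to start up and listen on UNIX domain socket ...' lines come from the waitforlisten helper in autotest_common.sh, which in essence polls the new process's RPC socket until it answers. A minimal sketch of the idea (the loop shape is illustrative, not the verbatim helper):

  # rpc_get_methods is a cheap request every SPDK app answers once its RPC
  # server is up; -t 1 bounds each probe to one second
  while ! scripts/rpc.py -s /var/tmp/spdk.sock -t 1 rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done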
00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:49.694 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:50.636 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:50.636 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:13:50.636 20:27:42 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 malloc0 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:51.578 20:27:43 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:23.672 Fuzzing completed. 
Shutting down the fuzz application 00:14:23.672 00:14:23.672 Dumping successful admin opcodes: 00:14:23.672 8, 9, 10, 24, 00:14:23.672 Dumping successful io opcodes: 00:14:23.672 0, 00:14:23.672 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1149075, total successful commands: 4525, random_seed: 3787045120 00:14:23.672 NS: 0x200003a1ef00 admin qp, Total commands completed: 144615, total successful commands: 1174, random_seed: 1511903552 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 1249658 ']' 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1249658' 00:14:23.672 killing process with pid 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 1249658 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:23.672 00:14:23.672 real 0m33.680s 00:14:23.672 user 0m37.810s 00:14:23.672 sys 0m26.667s 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:23.672 20:28:15 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 ************************************ 00:14:23.672 END TEST nvmf_vfio_user_fuzz 00:14:23.672 ************************************ 00:14:23.672 20:28:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:23.672 20:28:15 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:23.672 20:28:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:23.672 20:28:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:23.672 20:28:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:23.672 ************************************ 
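For reference, the fuzz pass summarized above was driven by a single invocation (flags verbatim from the log; note the fuzzer runs with core mask 0x2 while its target was started with 0x1, so the two processes never share a core):

  test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a

-t 30 bounds the run to 30 seconds and -S 123456 pins the random seed so a failing pattern can be replayed. Assuming the fuzzer reports opcodes in decimal, the successful admin opcodes 8, 9, 10 and 24 correspond to Abort, Set Features, Get Features and Keep Alive, and I/O opcode 0 is Flush.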
00:14:23.672 START TEST nvmf_host_management 00:14:23.672 ************************************ 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:14:23.672 * Looking for test storage... 00:14:23.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.672 20:28:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.672 
20:28:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:23.673 20:28:15 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:14:23.673 20:28:15 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:31.823 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:31.823 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.823 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:31.824 Found net devices under 0000:31:00.0: cvl_0_0 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:31.824 Found net devices under 0000:31:00.1: cvl_0_1 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:31.824 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:31.824 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.478 ms 00:14:31.824 00:14:31.824 --- 10.0.0.2 ping statistics --- 00:14:31.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.824 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:31.824 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:31.824 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.233 ms 00:14:31.824 00:14:31.824 --- 10.0.0.1 ping statistics --- 00:14:31.824 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:31.824 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:31.824 20:28:23 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=1260449 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 1260449 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1260449 ']' 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:31.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:31.824 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:31.824 [2024-07-15 20:28:24.089687] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:31.824 [2024-07-15 20:28:24.089760] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:31.824 EAL: No free 2048 kB hugepages reported on node 1 00:14:31.824 [2024-07-15 20:28:24.188544] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:32.084 [2024-07-15 20:28:24.286363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:32.084 [2024-07-15 20:28:24.286426] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:32.084 [2024-07-15 20:28:24.286435] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:32.084 [2024-07-15 20:28:24.286442] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:32.084 [2024-07-15 20:28:24.286448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:32.084 [2024-07-15 20:28:24.286596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:32.084 [2024-07-15 20:28:24.286764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:32.084 [2024-07-15 20:28:24.286931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:32.084 [2024-07-15 20:28:24.286931] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 [2024-07-15 20:28:24.911715] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 20:28:24 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 Malloc0 00:14:32.654 [2024-07-15 20:28:24.974953] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:32.654 20:28:24 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1260690 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1260690 /var/tmp/bdevperf.sock 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 1260690 ']' 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:14:32.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
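From here on two RPC endpoints are live at once, which is easy to lose in the interleaved output: the nvmf target still answers on /var/tmp/spdk.sock, while bdevperf is given its own socket with -r on the command line below. A sketch of how each side is addressed (the queries are illustrative):

  scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_subsystems              # the target under test
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1   # the bdevperf initiator

The waitforio loop later in this test uses exactly that second form to poll Nvme0n1's read counter.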
00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:14:32.654 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:14:32.654 { 00:14:32.654 "params": { 00:14:32.654 "name": "Nvme$subsystem", 00:14:32.654 "trtype": "$TEST_TRANSPORT", 00:14:32.654 "traddr": "$NVMF_FIRST_TARGET_IP", 00:14:32.654 "adrfam": "ipv4", 00:14:32.654 "trsvcid": "$NVMF_PORT", 00:14:32.654 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:14:32.654 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:14:32.654 "hdgst": ${hdgst:-false}, 00:14:32.654 "ddgst": ${ddgst:-false} 00:14:32.654 }, 00:14:32.654 "method": "bdev_nvme_attach_controller" 00:14:32.654 } 00:14:32.654 EOF 00:14:32.654 )") 00:14:32.914 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:14:32.914 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:14:32.914 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:14:32.914 20:28:25 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:14:32.914 "params": { 00:14:32.914 "name": "Nvme0", 00:14:32.914 "trtype": "tcp", 00:14:32.914 "traddr": "10.0.0.2", 00:14:32.914 "adrfam": "ipv4", 00:14:32.914 "trsvcid": "4420", 00:14:32.914 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:14:32.914 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:14:32.914 "hdgst": false, 00:14:32.914 "ddgst": false 00:14:32.914 }, 00:14:32.914 "method": "bdev_nvme_attach_controller" 00:14:32.914 }' 00:14:32.914 [2024-07-15 20:28:25.084153] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:32.914 [2024-07-15 20:28:25.084216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1260690 ] 00:14:32.914 EAL: No free 2048 kB hugepages reported on node 1 00:14:32.914 [2024-07-15 20:28:25.151602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:32.914 [2024-07-15 20:28:25.216041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.173 Running I/O for 10 seconds... 
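The gen_nvmf_target_json helper above renders the bdev config that bdevperf reads from /dev/fd/63. Expanded, the payload looks roughly like this (the attach parameters are verbatim from the log; the outer subsystems/bdev wrapper is the usual shape of a bdevperf --json file and is assumed here):

  {
    "subsystems": [
      {
        "subsystem": "bdev",
        "config": [
          {
            "method": "bdev_nvme_attach_controller",
            "params": {
              "name": "Nvme0",
              "trtype": "tcp",
              "traddr": "10.0.0.2",
              "adrfam": "ipv4",
              "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode0",
              "hostnqn": "nqn.2016-06.io.spdk:host0",
              "hdgst": false,
              "ddgst": false
            }
          }
        ]
      }
    ]
  }

bdevperf then drives the resulting Nvme0n1 bdev with a queue depth of 64 and 64 KiB verify I/O (-q 64 -o 65536 -w verify) for 10 seconds (-t 10).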
00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:14:33.743 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=705 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 705 -ge 100 ']' 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:33.744 [2024-07-15 20:28:25.913782] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ae20 is same with the state(5) to be set 00:14:33.744 [2024-07-15 20:28:25.913859] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x248ae20 is same with the state(5) to be set 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:33.744 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:14:33.744 [2024-07-15 20:28:25.926482-25.926575] nvme_qpair.c: 223:nvme_admin_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, all 4 admin commands ABORTED - SQ DELETION (00/08) sqhd:0000 p:0 m:0 dnr:0 [4 near-identical command/completion pairs condensed]
00:14:33.744 [2024-07-15 20:28:25.926582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff3540 is same with the state(5) to be set
00:14:33.744 [2024-07-15 20:28:25.926627-25.927694] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 cid:0-63 nsid:1 lba:98304-106368 (stride 128) len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, all 64 in-flight WRITEs ABORTED - SQ DELETION (00/08) sqhd:0000 p:0 m:0 dnr:0 [64 near-identical command/completion pairs condensed]
00:14:33.745 [2024-07-15 20:28:25.927746] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1404340 was disconnected and freed. reset controller.
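Putting the trace together: the test waits for I/O to flow, then revokes the host's access to the subsystem, which is exactly what produces the abort cascade above; every queued WRITE completes with ABORTED - SQ DELETION when the target tears down the queue pair, and re-adding the host lets bdevperf's reset path reconnect. A schematic of that sequence with plain rpc.py calls, using the socket path, bdev name, and NQNs from this run:

rpc=scripts/rpc.py

# Wait until bdevperf has pushed some I/O through Nvme0n1 (>= 100 reads here).
while :; do
    reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done

# Revoke the host: the target drops the connection and aborts all in-flight commands.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-authorize it: the initiator's controller reset reconnects and I/O resumes.
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0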
00:14:33.745 [2024-07-15 20:28:25.929039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller
00:14:33.745 20:28:25 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:33.745 task offset: 98304 on job bdev=Nvme0n1 fails
00:14:33.745
00:14:33.745                                                               Latency(us)
00:14:33.745 Device Information                : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:14:33.745 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:33.745 Job: Nvme0n1 ended in about 0.53 seconds with error
00:14:33.745 Verification LBA range: start 0x0 length 0x400
00:14:33.745 Nvme0n1                           :       0.53    1460.53      91.28     121.71       0.00   39434.22    1815.89   33204.91
00:14:33.746 ===================================================================================================================
00:14:33.746 Total                             :               1460.53      91.28     121.71       0.00   39434.22    1815.89   33204.91
00:14:33.746 [2024-07-15 20:28:25.931011] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:14:33.746 [2024-07-15 20:28:25.931031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xff3540 (9): Bad file descriptor
00:14:33.746 20:28:25 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:14:33.746 [2024-07-15 20:28:25.983834] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1260690
00:14:34.684 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1260690) - No such process
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=()
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:14:34.684 {
00:14:34.684 "params": {
00:14:34.684 "name": "Nvme$subsystem",
00:14:34.684 "trtype": "$TEST_TRANSPORT",
00:14:34.684 "traddr": "$NVMF_FIRST_TARGET_IP",
00:14:34.684 "adrfam": "ipv4",
00:14:34.684 "trsvcid": "$NVMF_PORT",
00:14:34.684 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:14:34.684 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:14:34.684 "hdgst": ${hdgst:-false},
00:14:34.684 "ddgst": ${ddgst:-false}
00:14:34.684 },
00:14:34.684 "method": "bdev_nvme_attach_controller"
00:14:34.684 }
00:14:34.684 EOF
00:14:34.684 )")
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq .
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=,
00:14:34.684 20:28:26 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:14:34.684 "params": {
00:14:34.684 "name": "Nvme0",
00:14:34.684 "trtype": "tcp",
00:14:34.684 "traddr": "10.0.0.2",
00:14:34.684 "adrfam": "ipv4",
00:14:34.684 "trsvcid": "4420",
00:14:34.684 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:14:34.684 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:14:34.684 "hdgst": false,
00:14:34.684 "ddgst": false
00:14:34.684 },
00:14:34.684 "method": "bdev_nvme_attach_controller"
00:14:34.684 }'
00:14:34.684 [2024-07-15 20:28:26.997629] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:14:34.684 [2024-07-15 20:28:26.997682] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1261043 ]
00:14:34.684 EAL: No free 2048 kB hugepages reported on node 1
00:14:34.684 [2024-07-15 20:28:27.063276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:34.945 [2024-07-15 20:28:27.127610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:14:35.206 Running I/O for 1 seconds...
00:14:36.157
00:14:36.157                                                               Latency(us)
00:14:36.157 Device Information                : runtime(s)       IOPS      MiB/s     Fail/s       TO/s    Average        min        max
00:14:36.157 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:14:36.157 Verification LBA range: start 0x0 length 0x400
00:14:36.157 Nvme0n1                           :       1.01    1459.77      91.24       0.00       0.00   43116.37    7864.32   35170.99
00:14:36.157 ===================================================================================================================
00:14:36.157 Total                             :               1459.77      91.24       0.00       0.00   43116.37    7864.32   35170.99
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:36.420 rmmod nvme_tcp
00:14:36.420 rmmod nvme_fabrics
00:14:36.420 rmmod nvme_keyring
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0
00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management --
nvmf/common.sh@489 -- # '[' -n 1260449 ']' 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 1260449 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 1260449 ']' 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 1260449 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1260449 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1260449' 00:14:36.420 killing process with pid 1260449 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 1260449 00:14:36.420 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 1260449 00:14:36.682 [2024-07-15 20:28:28.822766] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:36.682 20:28:28 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.594 20:28:30 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:38.594 20:28:30 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:14:38.594 00:14:38.594 real 0m15.259s 00:14:38.594 user 0m23.120s 00:14:38.594 sys 0m7.124s 00:14:38.594 20:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:38.594 20:28:30 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:14:38.594 ************************************ 00:14:38.594 END TEST nvmf_host_management 00:14:38.594 ************************************ 00:14:38.594 20:28:30 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:38.594 20:28:30 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:38.594 20:28:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:38.594 20:28:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:38.594 20:28:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:38.860 ************************************ 00:14:38.860 START TEST nvmf_lvol 00:14:38.860 
************************************ 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:14:38.860 * Looking for test storage... 00:14:38.860 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # 
local -g is_hw=no 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:14:38.860 20:28:31 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:47.052 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:47.053 20:28:38 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:14:47.053 Found 0000:31:00.0 (0x8086 - 0x159b) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:14:47.053 Found 0000:31:00.1 (0x8086 - 0x159b) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:14:47.053 Found net devices under 0000:31:00.0: cvl_0_0 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:14:47.053 Found net devices under 0000:31:00.1: cvl_0_1 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:47.053 20:28:38 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:47.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:47.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.695 ms 00:14:47.053 00:14:47.053 --- 10.0.0.2 ping statistics --- 00:14:47.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.053 rtt min/avg/max/mdev = 0.695/0.695/0.695/0.000 ms 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:47.053 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:47.053 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:14:47.053 00:14:47.053 --- 10.0.0.1 ping statistics --- 00:14:47.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:47.053 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=1266064 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 1266064 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 1266064 ']' 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:47.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.053 20:28:39 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.053 [2024-07-15 20:28:39.288140] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:14:47.053 [2024-07-15 20:28:39.288190] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:47.053 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.053 [2024-07-15 20:28:39.360440] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:47.053 [2024-07-15 20:28:39.425434] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:47.053 [2024-07-15 20:28:39.425473] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:14:47.053 [2024-07-15 20:28:39.425481] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:47.053 [2024-07-15 20:28:39.425487] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:47.053 [2024-07-15 20:28:39.425493] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:47.053 [2024-07-15 20:28:39.425636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.053 [2024-07-15 20:28:39.425749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:47.053 [2024-07-15 20:28:39.425752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:47.995 [2024-07-15 20:28:40.257888] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:47.995 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:48.255 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:14:48.255 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:14:48.515 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:14:48.515 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:14:48.515 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:14:48.783 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=8b05f062-b9e8-4c8d-b096-cf0925ec3fcb 00:14:48.783 20:28:40 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 8b05f062-b9e8-4c8d-b096-cf0925ec3fcb lvol 20 00:14:49.043 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=7b3b0604-1122-40aa-95f8-615193e0fdaf 00:14:49.043 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:14:49.043 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7b3b0604-1122-40aa-95f8-615193e0fdaf 00:14:49.303 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:14:49.303 [2024-07-15 20:28:41.631861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:49.303 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:49.563 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1266697 00:14:49.563 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:14:49.563 20:28:41 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:14:49.563 EAL: No free 2048 kB hugepages reported on node 1 00:14:50.502 20:28:42 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 7b3b0604-1122-40aa-95f8-615193e0fdaf MY_SNAPSHOT 00:14:50.764 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a497f1b1-23aa-4ab0-9f23-b3816c44f491 00:14:50.764 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 7b3b0604-1122-40aa-95f8-615193e0fdaf 30 00:14:51.024 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a497f1b1-23aa-4ab0-9f23-b3816c44f491 MY_CLONE 00:14:51.284 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=0ce1e769-eba6-488a-9173-04d46825c24c 00:14:51.284 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 0ce1e769-eba6-488a-9173-04d46825c24c 00:14:51.545 20:28:43 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1266697 00:15:01.548 Initializing NVMe Controllers 00:15:01.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:15:01.548 Controller IO queue size 128, less than required. 00:15:01.548 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:15:01.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:15:01.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:15:01.548 Initialization complete. Launching workers. 
00:15:01.548 ======================================================== 00:15:01.548 Latency(us) 00:15:01.548 Device Information : IOPS MiB/s Average min max 00:15:01.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12481.10 48.75 10260.68 1546.56 56383.63 00:15:01.548 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 18098.60 70.70 7074.30 795.38 51779.45 00:15:01.548 ======================================================== 00:15:01.548 Total : 30579.70 119.45 8374.82 795.38 56383.63 00:15:01.548 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7b3b0604-1122-40aa-95f8-615193e0fdaf 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8b05f062-b9e8-4c8d-b096-cf0925ec3fcb 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:01.548 rmmod nvme_tcp 00:15:01.548 rmmod nvme_fabrics 00:15:01.548 rmmod nvme_keyring 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 1266064 ']' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 1266064 ']' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1266064' 00:15:01.548 killing process with pid 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 1266064 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:01.548 
20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:01.548 20:28:52 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.934 20:28:54 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:02.934 00:15:02.934 real 0m23.940s 00:15:02.934 user 1m3.688s 00:15:02.934 sys 0m8.312s 00:15:02.934 20:28:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:02.934 20:28:54 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:15:02.934 ************************************ 00:15:02.934 END TEST nvmf_lvol 00:15:02.934 ************************************ 00:15:02.934 20:28:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:02.934 20:28:54 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:02.934 20:28:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:02.934 20:28:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:02.934 20:28:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:02.934 ************************************ 00:15:02.934 START TEST nvmf_lvs_grow 00:15:02.934 ************************************ 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:15:02.934 * Looking for test storage... 
00:15:02.934 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:02.934 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:15:02.935 20:28:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:11.095 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:11.095 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:11.095 Found net devices under 0000:31:00.0: cvl_0_0 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:11.095 Found net devices under 0000:31:00.1: cvl_0_1 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:11.095 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:11.095 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.594 ms 00:15:11.095 00:15:11.095 --- 10.0.0.2 ping statistics --- 00:15:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.095 rtt min/avg/max/mdev = 0.594/0.594/0.594/0.000 ms 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:11.095 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:11.095 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.271 ms 00:15:11.095 00:15:11.095 --- 10.0.0.1 ping statistics --- 00:15:11.095 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:11.095 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:11.095 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=1273461 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 1273461 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 1273461 ']' 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:11.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:11.096 20:29:03 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.096 [2024-07-15 20:29:03.420350] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:11.096 [2024-07-15 20:29:03.420401] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:11.096 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.359 [2024-07-15 20:29:03.493429] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:11.359 [2024-07-15 20:29:03.557338] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:11.359 [2024-07-15 20:29:03.557374] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:11.359 [2024-07-15 20:29:03.557381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:11.360 [2024-07-15 20:29:03.557388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:11.360 [2024-07-15 20:29:03.557393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:11.360 [2024-07-15 20:29:03.557419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:11.931 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:11.931 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:15:11.931 20:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:11.931 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:11.932 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:11.932 20:29:04 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:11.932 20:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:12.192 [2024-07-15 20:29:04.368360] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:12.192 ************************************ 00:15:12.192 START TEST lvs_grow_clean 00:15:12.192 ************************************ 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.192 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:12.453 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:15:12.453 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:12.453 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:12.454 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:12.454 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:12.715 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:12.715 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:12.715 20:29:04 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u db6c80bc-d3ee-48f8-a960-d396e51c061a lvol 150 00:15:12.975 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=ca3eb2d4-2813-454a-9247-c9dc617fb56a 00:15:12.975 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:12.976 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:12.976 [2024-07-15 20:29:05.246314] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:12.976 [2024-07-15 20:29:05.246370] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:12.976 true 00:15:12.976 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:12.976 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:13.236 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:13.236 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:13.236 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ca3eb2d4-2813-454a-9247-c9dc617fb56a 00:15:13.496 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:13.496 [2024-07-15 20:29:05.836117] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:13.496 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1274129 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1274129 /var/tmp/bdevperf.sock 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 1274129 ']' 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:13.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:13.757 20:29:05 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:13.757 [2024-07-15 20:29:06.039383] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:15:13.757 [2024-07-15 20:29:06.039437] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1274129 ] 00:15:13.757 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.757 [2024-07-15 20:29:06.120529] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:14.018 [2024-07-15 20:29:06.184810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.587 20:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:14.587 20:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:15:14.587 20:29:06 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:14.846 Nvme0n1 00:15:14.846 20:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:15.106 [ 00:15:15.106 { 00:15:15.106 "name": "Nvme0n1", 00:15:15.106 "aliases": [ 00:15:15.106 "ca3eb2d4-2813-454a-9247-c9dc617fb56a" 00:15:15.106 ], 00:15:15.106 "product_name": "NVMe disk", 00:15:15.106 "block_size": 4096, 00:15:15.106 "num_blocks": 38912, 00:15:15.106 "uuid": "ca3eb2d4-2813-454a-9247-c9dc617fb56a", 00:15:15.106 "assigned_rate_limits": { 00:15:15.106 "rw_ios_per_sec": 0, 00:15:15.106 "rw_mbytes_per_sec": 0, 00:15:15.106 "r_mbytes_per_sec": 0, 00:15:15.106 "w_mbytes_per_sec": 0 00:15:15.106 }, 00:15:15.106 "claimed": false, 00:15:15.106 "zoned": false, 00:15:15.106 "supported_io_types": { 00:15:15.106 "read": true, 00:15:15.106 "write": true, 00:15:15.106 "unmap": true, 00:15:15.106 "flush": true, 00:15:15.106 "reset": true, 00:15:15.106 "nvme_admin": true, 00:15:15.106 "nvme_io": true, 00:15:15.106 "nvme_io_md": false, 00:15:15.106 "write_zeroes": true, 00:15:15.106 "zcopy": false, 00:15:15.106 "get_zone_info": false, 00:15:15.106 "zone_management": false, 00:15:15.106 "zone_append": false, 00:15:15.106 "compare": true, 00:15:15.106 "compare_and_write": true, 00:15:15.106 "abort": true, 00:15:15.106 "seek_hole": false, 00:15:15.106 "seek_data": false, 00:15:15.106 "copy": true, 00:15:15.106 "nvme_iov_md": false 00:15:15.106 }, 00:15:15.106 "memory_domains": [ 00:15:15.106 { 00:15:15.106 "dma_device_id": "system", 00:15:15.106 "dma_device_type": 1 00:15:15.106 } 00:15:15.106 ], 00:15:15.106 "driver_specific": { 00:15:15.106 "nvme": [ 00:15:15.106 { 00:15:15.106 "trid": { 00:15:15.106 "trtype": "TCP", 00:15:15.106 "adrfam": "IPv4", 00:15:15.106 "traddr": "10.0.0.2", 00:15:15.106 "trsvcid": "4420", 00:15:15.106 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:15.106 }, 00:15:15.106 "ctrlr_data": { 00:15:15.106 "cntlid": 1, 00:15:15.106 "vendor_id": "0x8086", 00:15:15.106 "model_number": "SPDK bdev Controller", 00:15:15.106 "serial_number": "SPDK0", 00:15:15.106 "firmware_revision": "24.09", 00:15:15.106 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:15.106 "oacs": { 00:15:15.106 "security": 0, 00:15:15.106 "format": 0, 00:15:15.106 "firmware": 0, 00:15:15.106 "ns_manage": 0 00:15:15.106 }, 00:15:15.106 "multi_ctrlr": true, 00:15:15.106 "ana_reporting": false 00:15:15.106 }, 
00:15:15.106 "vs": { 00:15:15.106 "nvme_version": "1.3" 00:15:15.106 }, 00:15:15.106 "ns_data": { 00:15:15.106 "id": 1, 00:15:15.106 "can_share": true 00:15:15.106 } 00:15:15.106 } 00:15:15.106 ], 00:15:15.106 "mp_policy": "active_passive" 00:15:15.106 } 00:15:15.106 } 00:15:15.106 ] 00:15:15.106 20:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1274299 00:15:15.106 20:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:15.106 20:29:07 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:15.106 Running I/O for 10 seconds... 00:15:16.483 Latency(us) 00:15:16.483 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:16.483 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:16.483 Nvme0n1 : 1.00 17987.00 70.26 0.00 0.00 0.00 0.00 0.00 00:15:16.483 =================================================================================================================== 00:15:16.483 Total : 17987.00 70.26 0.00 0.00 0.00 0.00 0.00 00:15:16.483 00:15:17.052 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:17.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:17.312 Nvme0n1 : 2.00 18110.50 70.74 0.00 0.00 0.00 0.00 0.00 00:15:17.312 =================================================================================================================== 00:15:17.312 Total : 18110.50 70.74 0.00 0.00 0.00 0.00 0.00 00:15:17.312 00:15:17.312 true 00:15:17.312 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:17.312 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:17.572 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:17.572 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:17.572 20:29:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1274299 00:15:18.144 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:18.144 Nvme0n1 : 3.00 18174.67 70.99 0.00 0.00 0.00 0.00 0.00 00:15:18.144 =================================================================================================================== 00:15:18.144 Total : 18174.67 70.99 0.00 0.00 0.00 0.00 0.00 00:15:18.144 00:15:19.086 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:19.086 Nvme0n1 : 4.00 18206.00 71.12 0.00 0.00 0.00 0.00 0.00 00:15:19.086 =================================================================================================================== 00:15:19.086 Total : 18206.00 71.12 0.00 0.00 0.00 0.00 0.00 00:15:19.086 00:15:20.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:20.470 Nvme0n1 : 5.00 18238.40 71.24 0.00 0.00 0.00 0.00 0.00 00:15:20.470 =================================================================================================================== 00:15:20.470 
Total : 18238.40 71.24 0.00 0.00 0.00 0.00 0.00 00:15:20.470 00:15:21.410 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:21.410 Nvme0n1 : 6.00 18259.50 71.33 0.00 0.00 0.00 0.00 0.00 00:15:21.410 =================================================================================================================== 00:15:21.410 Total : 18259.50 71.33 0.00 0.00 0.00 0.00 0.00 00:15:21.410 00:15:22.401 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:22.401 Nvme0n1 : 7.00 18283.57 71.42 0.00 0.00 0.00 0.00 0.00 00:15:22.401 =================================================================================================================== 00:15:22.401 Total : 18283.57 71.42 0.00 0.00 0.00 0.00 0.00 00:15:22.401 00:15:23.344 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:23.344 Nvme0n1 : 8.00 18301.38 71.49 0.00 0.00 0.00 0.00 0.00 00:15:23.344 =================================================================================================================== 00:15:23.344 Total : 18301.38 71.49 0.00 0.00 0.00 0.00 0.00 00:15:23.344 00:15:24.287 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:24.287 Nvme0n1 : 9.00 18310.44 71.53 0.00 0.00 0.00 0.00 0.00 00:15:24.287 =================================================================================================================== 00:15:24.287 Total : 18310.44 71.53 0.00 0.00 0.00 0.00 0.00 00:15:24.287 00:15:25.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.226 Nvme0n1 : 10.00 18320.80 71.57 0.00 0.00 0.00 0.00 0.00 00:15:25.226 =================================================================================================================== 00:15:25.226 Total : 18320.80 71.57 0.00 0.00 0.00 0.00 0.00 00:15:25.226 00:15:25.226 00:15:25.226 Latency(us) 00:15:25.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.226 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:25.226 Nvme0n1 : 10.00 18326.11 71.59 0.00 0.00 6981.95 4341.76 16930.13 00:15:25.226 =================================================================================================================== 00:15:25.226 Total : 18326.11 71.59 0.00 0.00 6981.95 4341.76 16930.13 00:15:25.226 0 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1274129 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 1274129 ']' 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 1274129 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1274129 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1274129' 00:15:25.226 killing process with pid 1274129 00:15:25.226 20:29:17 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 1274129 00:15:25.226 Received shutdown signal, test time was about 10.000000 seconds 00:15:25.226 00:15:25.226 Latency(us) 00:15:25.226 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:25.226 =================================================================================================================== 00:15:25.226 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:25.226 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 1274129 00:15:25.498 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:25.498 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:25.758 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:25.758 20:29:17 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:26.018 [2024-07-15 20:29:18.283378] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:26.018 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:26.278 request: 00:15:26.278 { 00:15:26.278 "uuid": "db6c80bc-d3ee-48f8-a960-d396e51c061a", 00:15:26.278 "method": "bdev_lvol_get_lvstores", 00:15:26.278 "req_id": 1 00:15:26.278 } 00:15:26.278 Got JSON-RPC error response 00:15:26.278 response: 00:15:26.278 { 00:15:26.278 "code": -19, 00:15:26.278 "message": "No such device" 00:15:26.278 } 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:26.278 aio_bdev 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ca3eb2d4-2813-454a-9247-c9dc617fb56a 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=ca3eb2d4-2813-454a-9247-c9dc617fb56a 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:26.278 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:26.537 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ca3eb2d4-2813-454a-9247-c9dc617fb56a -t 2000 00:15:26.537 [ 00:15:26.537 { 00:15:26.537 "name": "ca3eb2d4-2813-454a-9247-c9dc617fb56a", 00:15:26.537 "aliases": [ 00:15:26.537 "lvs/lvol" 00:15:26.537 ], 00:15:26.537 "product_name": "Logical Volume", 00:15:26.537 "block_size": 4096, 00:15:26.537 "num_blocks": 38912, 00:15:26.537 "uuid": "ca3eb2d4-2813-454a-9247-c9dc617fb56a", 00:15:26.537 "assigned_rate_limits": { 00:15:26.537 "rw_ios_per_sec": 0, 00:15:26.537 "rw_mbytes_per_sec": 0, 00:15:26.537 "r_mbytes_per_sec": 0, 00:15:26.537 "w_mbytes_per_sec": 0 00:15:26.537 }, 00:15:26.537 "claimed": false, 00:15:26.537 "zoned": false, 00:15:26.537 "supported_io_types": { 00:15:26.537 "read": true, 00:15:26.537 "write": true, 00:15:26.537 "unmap": true, 00:15:26.537 "flush": false, 00:15:26.537 "reset": true, 00:15:26.537 "nvme_admin": false, 00:15:26.537 "nvme_io": false, 00:15:26.537 
"nvme_io_md": false, 00:15:26.537 "write_zeroes": true, 00:15:26.538 "zcopy": false, 00:15:26.538 "get_zone_info": false, 00:15:26.538 "zone_management": false, 00:15:26.538 "zone_append": false, 00:15:26.538 "compare": false, 00:15:26.538 "compare_and_write": false, 00:15:26.538 "abort": false, 00:15:26.538 "seek_hole": true, 00:15:26.538 "seek_data": true, 00:15:26.538 "copy": false, 00:15:26.538 "nvme_iov_md": false 00:15:26.538 }, 00:15:26.538 "driver_specific": { 00:15:26.538 "lvol": { 00:15:26.538 "lvol_store_uuid": "db6c80bc-d3ee-48f8-a960-d396e51c061a", 00:15:26.538 "base_bdev": "aio_bdev", 00:15:26.538 "thin_provision": false, 00:15:26.538 "num_allocated_clusters": 38, 00:15:26.538 "snapshot": false, 00:15:26.538 "clone": false, 00:15:26.538 "esnap_clone": false 00:15:26.538 } 00:15:26.538 } 00:15:26.538 } 00:15:26.538 ] 00:15:26.797 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:15:26.797 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:26.797 20:29:18 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:26.797 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:26.797 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:26.797 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:27.056 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:27.056 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ca3eb2d4-2813-454a-9247-c9dc617fb56a 00:15:27.056 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u db6c80bc-d3ee-48f8-a960-d396e51c061a 00:15:27.317 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.578 00:15:27.578 real 0m15.302s 00:15:27.578 user 0m15.115s 00:15:27.578 sys 0m1.210s 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:15:27.578 ************************************ 00:15:27.578 END TEST lvs_grow_clean 00:15:27.578 ************************************ 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:27.578 ************************************ 00:15:27.578 START TEST lvs_grow_dirty 00:15:27.578 ************************************ 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:27.578 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:27.839 20:29:19 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:15:27.839 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:15:27.839 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:27.839 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:27.839 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 lvol 150 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:28.099 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:15:28.360 
[2024-07-15 20:29:20.600905] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:15:28.360 [2024-07-15 20:29:20.600959] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:15:28.360 true 00:15:28.360 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:28.360 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:15:28.621 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:15:28.621 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:15:28.621 20:29:20 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:28.883 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:15:28.883 [2024-07-15 20:29:21.206738] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:28.883 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1277110 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1277110 /var/tmp/bdevperf.sock 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1277110 ']' 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:15:29.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
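The I/O half of the test is driven over bdevperf's private RPC socket: launched with -z it idles until told what to do, then the target's namespace is attached as Nvme0 and perform_tests starts the 10-second randwrite run whose per-second results follow. Condensed from the commands in the trace, with paths shortened to their repo-relative form:

# Start bdevperf idle (-z), listening on its own RPC socket
build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

# Attach the NVMe-oF namespace exported by the target, then kick off the workload
scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests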
00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:29.144 20:29:21 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:29.144 [2024-07-15 20:29:21.418996] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:29.144 [2024-07-15 20:29:21.419046] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1277110 ] 00:15:29.144 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.144 [2024-07-15 20:29:21.498217] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:29.405 [2024-07-15 20:29:21.552166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.987 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:29.987 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:29.987 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:15:30.251 Nvme0n1 00:15:30.251 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:15:30.251 [ 00:15:30.251 { 00:15:30.251 "name": "Nvme0n1", 00:15:30.251 "aliases": [ 00:15:30.251 "ab626df8-2995-4c14-8793-93ff8e1af2f0" 00:15:30.251 ], 00:15:30.251 "product_name": "NVMe disk", 00:15:30.251 "block_size": 4096, 00:15:30.251 "num_blocks": 38912, 00:15:30.251 "uuid": "ab626df8-2995-4c14-8793-93ff8e1af2f0", 00:15:30.251 "assigned_rate_limits": { 00:15:30.251 "rw_ios_per_sec": 0, 00:15:30.251 "rw_mbytes_per_sec": 0, 00:15:30.251 "r_mbytes_per_sec": 0, 00:15:30.251 "w_mbytes_per_sec": 0 00:15:30.251 }, 00:15:30.251 "claimed": false, 00:15:30.251 "zoned": false, 00:15:30.251 "supported_io_types": { 00:15:30.251 "read": true, 00:15:30.251 "write": true, 00:15:30.251 "unmap": true, 00:15:30.251 "flush": true, 00:15:30.251 "reset": true, 00:15:30.251 "nvme_admin": true, 00:15:30.251 "nvme_io": true, 00:15:30.251 "nvme_io_md": false, 00:15:30.251 "write_zeroes": true, 00:15:30.251 "zcopy": false, 00:15:30.251 "get_zone_info": false, 00:15:30.251 "zone_management": false, 00:15:30.251 "zone_append": false, 00:15:30.251 "compare": true, 00:15:30.251 "compare_and_write": true, 00:15:30.251 "abort": true, 00:15:30.251 "seek_hole": false, 00:15:30.251 "seek_data": false, 00:15:30.251 "copy": true, 00:15:30.251 "nvme_iov_md": false 00:15:30.251 }, 00:15:30.251 "memory_domains": [ 00:15:30.251 { 00:15:30.251 "dma_device_id": "system", 00:15:30.251 "dma_device_type": 1 00:15:30.251 } 00:15:30.251 ], 00:15:30.251 "driver_specific": { 00:15:30.251 "nvme": [ 00:15:30.251 { 00:15:30.251 "trid": { 00:15:30.251 "trtype": "TCP", 00:15:30.251 "adrfam": "IPv4", 00:15:30.251 "traddr": "10.0.0.2", 00:15:30.251 "trsvcid": "4420", 00:15:30.251 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:15:30.251 }, 00:15:30.251 "ctrlr_data": { 00:15:30.251 "cntlid": 1, 00:15:30.251 "vendor_id": "0x8086", 00:15:30.251 "model_number": "SPDK bdev Controller", 00:15:30.251 "serial_number": "SPDK0", 
00:15:30.251 "firmware_revision": "24.09", 00:15:30.251 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:15:30.251 "oacs": { 00:15:30.251 "security": 0, 00:15:30.251 "format": 0, 00:15:30.251 "firmware": 0, 00:15:30.251 "ns_manage": 0 00:15:30.251 }, 00:15:30.251 "multi_ctrlr": true, 00:15:30.251 "ana_reporting": false 00:15:30.251 }, 00:15:30.251 "vs": { 00:15:30.251 "nvme_version": "1.3" 00:15:30.251 }, 00:15:30.251 "ns_data": { 00:15:30.251 "id": 1, 00:15:30.251 "can_share": true 00:15:30.251 } 00:15:30.251 } 00:15:30.251 ], 00:15:30.251 "mp_policy": "active_passive" 00:15:30.251 } 00:15:30.251 } 00:15:30.251 ] 00:15:30.251 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1277266 00:15:30.251 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:15:30.251 20:29:22 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:15:30.513 Running I/O for 10 seconds... 00:15:31.455 Latency(us) 00:15:31.455 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:31.455 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:31.455 Nvme0n1 : 1.00 17993.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:31.455 =================================================================================================================== 00:15:31.455 Total : 17993.00 70.29 0.00 0.00 0.00 0.00 0.00 00:15:31.455 00:15:32.397 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:32.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:32.397 Nvme0n1 : 2.00 18117.00 70.77 0.00 0.00 0.00 0.00 0.00 00:15:32.397 =================================================================================================================== 00:15:32.397 Total : 18117.00 70.77 0.00 0.00 0.00 0.00 0.00 00:15:32.397 00:15:32.397 true 00:15:32.397 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:32.397 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:15:32.658 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:15:32.658 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:15:32.658 20:29:24 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1277266 00:15:33.612 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:33.612 Nvme0n1 : 3.00 18178.00 71.01 0.00 0.00 0.00 0.00 0.00 00:15:33.612 =================================================================================================================== 00:15:33.612 Total : 18178.00 71.01 0.00 0.00 0.00 0.00 0.00 00:15:33.612 00:15:34.559 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:34.559 Nvme0n1 : 4.00 18209.25 71.13 0.00 0.00 0.00 0.00 0.00 00:15:34.559 =================================================================================================================== 00:15:34.559 Total : 18209.25 71.13 0.00 
0.00 0.00 0.00 0.00 00:15:34.559 00:15:35.500 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:35.500 Nvme0n1 : 5.00 18241.00 71.25 0.00 0.00 0.00 0.00 0.00 00:15:35.500 =================================================================================================================== 00:15:35.500 Total : 18241.00 71.25 0.00 0.00 0.00 0.00 0.00 00:15:35.500 00:15:36.439 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:36.439 Nvme0n1 : 6.00 18261.50 71.33 0.00 0.00 0.00 0.00 0.00 00:15:36.439 =================================================================================================================== 00:15:36.439 Total : 18261.50 71.33 0.00 0.00 0.00 0.00 0.00 00:15:36.439 00:15:37.376 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:37.376 Nvme0n1 : 7.00 18285.57 71.43 0.00 0.00 0.00 0.00 0.00 00:15:37.376 =================================================================================================================== 00:15:37.376 Total : 18285.57 71.43 0.00 0.00 0.00 0.00 0.00 00:15:37.376 00:15:38.317 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:38.317 Nvme0n1 : 8.00 18295.12 71.47 0.00 0.00 0.00 0.00 0.00 00:15:38.317 =================================================================================================================== 00:15:38.317 Total : 18295.12 71.47 0.00 0.00 0.00 0.00 0.00 00:15:38.317 00:15:39.703 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:39.703 Nvme0n1 : 9.00 18305.11 71.50 0.00 0.00 0.00 0.00 0.00 00:15:39.703 =================================================================================================================== 00:15:39.703 Total : 18305.11 71.50 0.00 0.00 0.00 0.00 0.00 00:15:39.703 00:15:40.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.645 Nvme0n1 : 10.00 18316.00 71.55 0.00 0.00 0.00 0.00 0.00 00:15:40.645 =================================================================================================================== 00:15:40.645 Total : 18316.00 71.55 0.00 0.00 0.00 0.00 0.00 00:15:40.645 00:15:40.645 00:15:40.645 Latency(us) 00:15:40.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.645 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:15:40.645 Nvme0n1 : 10.01 18317.23 71.55 0.00 0.00 6985.47 4396.37 16930.13 00:15:40.645 =================================================================================================================== 00:15:40.645 Total : 18317.23 71.55 0.00 0.00 6985.47 4396.37 16930.13 00:15:40.645 0 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1277110 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 1277110 ']' 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 1277110 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1277110 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:15:40.645 20:29:32 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1277110' 00:15:40.645 killing process with pid 1277110 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 1277110 00:15:40.645 Received shutdown signal, test time was about 10.000000 seconds 00:15:40.645 00:15:40.645 Latency(us) 00:15:40.645 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:40.645 =================================================================================================================== 00:15:40.645 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 1277110 00:15:40.645 20:29:32 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:15:40.906 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:15:40.906 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:15:40.906 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1273461 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1273461 00:15:41.166 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1273461 Killed "${NVMF_APP[@]}" "$@" 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=1279477 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 1279477 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 1279477 ']' 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:41.166 20:29:33 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:41.166 [2024-07-15 20:29:33.525697] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:41.166 [2024-07-15 20:29:33.525754] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:41.427 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.427 [2024-07-15 20:29:33.600593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:41.427 [2024-07-15 20:29:33.667019] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:41.427 [2024-07-15 20:29:33.667058] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:41.427 [2024-07-15 20:29:33.667065] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:41.427 [2024-07-15 20:29:33.667072] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:41.427 [2024-07-15 20:29:33.667078] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
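Here is the point of the dirty variant: the first nvmf_tgt (pid 1273461) was killed with SIGKILL while the lvstore was still open, so when the replacement target re-creates the AIO bdev the blobstore has to replay its metadata — the "Performing recovery on blobstore" notices just below. The subsequent check is the same cluster accounting as the clean case. Roughly, reusing the shorthand from the first sketch:

kill -9 "$nvmfpid"                               # unclean shutdown: lvstore never unloaded
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &

# Re-creating the AIO bdev triggers blobstore recovery and re-registers the lvstore
$rpc bdev_aio_create /tmp/aio_file aio_bdev 4096
$rpc bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].free_clusters'   # still 61 after replay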
00:15:41.427 [2024-07-15 20:29:33.667096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:42.000 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:42.262 [2024-07-15 20:29:34.468187] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:15:42.263 [2024-07-15 20:29:34.468284] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:15:42.263 [2024-07-15 20:29:34.468314] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:42.263 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:42.524 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab626df8-2995-4c14-8793-93ff8e1af2f0 -t 2000 00:15:42.524 [ 00:15:42.524 { 00:15:42.524 "name": "ab626df8-2995-4c14-8793-93ff8e1af2f0", 00:15:42.524 "aliases": [ 00:15:42.524 "lvs/lvol" 00:15:42.524 ], 00:15:42.524 "product_name": "Logical Volume", 00:15:42.524 "block_size": 4096, 00:15:42.524 "num_blocks": 38912, 00:15:42.524 "uuid": "ab626df8-2995-4c14-8793-93ff8e1af2f0", 00:15:42.524 "assigned_rate_limits": { 00:15:42.524 "rw_ios_per_sec": 0, 00:15:42.524 "rw_mbytes_per_sec": 0, 00:15:42.524 "r_mbytes_per_sec": 0, 00:15:42.524 "w_mbytes_per_sec": 0 00:15:42.524 }, 00:15:42.524 "claimed": false, 00:15:42.524 "zoned": false, 00:15:42.524 "supported_io_types": { 00:15:42.524 "read": true, 00:15:42.524 "write": true, 00:15:42.524 "unmap": true, 00:15:42.524 "flush": false, 00:15:42.524 "reset": true, 00:15:42.524 "nvme_admin": false, 00:15:42.524 "nvme_io": false, 00:15:42.524 "nvme_io_md": 
false, 00:15:42.524 "write_zeroes": true, 00:15:42.524 "zcopy": false, 00:15:42.524 "get_zone_info": false, 00:15:42.524 "zone_management": false, 00:15:42.524 "zone_append": false, 00:15:42.524 "compare": false, 00:15:42.524 "compare_and_write": false, 00:15:42.524 "abort": false, 00:15:42.524 "seek_hole": true, 00:15:42.524 "seek_data": true, 00:15:42.524 "copy": false, 00:15:42.524 "nvme_iov_md": false 00:15:42.524 }, 00:15:42.524 "driver_specific": { 00:15:42.524 "lvol": { 00:15:42.524 "lvol_store_uuid": "df57eb5f-1de6-4e7d-9a6e-6a4476919f49", 00:15:42.524 "base_bdev": "aio_bdev", 00:15:42.524 "thin_provision": false, 00:15:42.524 "num_allocated_clusters": 38, 00:15:42.524 "snapshot": false, 00:15:42.524 "clone": false, 00:15:42.524 "esnap_clone": false 00:15:42.524 } 00:15:42.524 } 00:15:42.524 } 00:15:42.524 ] 00:15:42.524 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:42.524 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:42.524 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:15:42.785 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:15:42.785 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:42.785 20:29:34 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:15:42.785 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:15:42.785 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:43.046 [2024-07-15 20:29:35.252109] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
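The run of case/type lines here (and continuing below) is autotest_common.sh's NOT helper doing a negative test: with aio_bdev gone, bdev_lvol_get_lvstores must fail, and the -19 "No such device" JSON-RPC error below is the expected outcome. Paraphrasing the logic visible in the trace — this is a sketch, not the verbatim helper, which also validates that the argument is executable and special-cases signal exits (es > 128):

NOT() {
    local es=0
    "$@" || es=$?        # run the command, remembering its exit status
    (( !es == 0 ))       # succeed only if the command failed
}

NOT $rpc bdev_lvol_get_lvstores -u "$lvs"   # passes: the lvstore no longer exists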
00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:43.046 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:43.306 request: 00:15:43.306 { 00:15:43.306 "uuid": "df57eb5f-1de6-4e7d-9a6e-6a4476919f49", 00:15:43.306 "method": "bdev_lvol_get_lvstores", 00:15:43.306 "req_id": 1 00:15:43.306 } 00:15:43.306 Got JSON-RPC error response 00:15:43.306 response: 00:15:43.306 { 00:15:43.306 "code": -19, 00:15:43.306 "message": "No such device" 00:15:43.306 } 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:15:43.306 aio_bdev 00:15:43.306 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:43.307 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:15:43.567 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b ab626df8-2995-4c14-8793-93ff8e1af2f0 -t 2000 00:15:43.567 [ 00:15:43.567 { 00:15:43.567 "name": "ab626df8-2995-4c14-8793-93ff8e1af2f0", 00:15:43.567 "aliases": [ 00:15:43.567 "lvs/lvol" 00:15:43.567 ], 00:15:43.567 "product_name": "Logical Volume", 00:15:43.567 "block_size": 4096, 00:15:43.567 "num_blocks": 38912, 00:15:43.567 "uuid": "ab626df8-2995-4c14-8793-93ff8e1af2f0", 00:15:43.567 "assigned_rate_limits": { 00:15:43.567 "rw_ios_per_sec": 0, 00:15:43.567 "rw_mbytes_per_sec": 0, 00:15:43.568 "r_mbytes_per_sec": 0, 00:15:43.568 "w_mbytes_per_sec": 0 00:15:43.568 }, 00:15:43.568 "claimed": false, 00:15:43.568 "zoned": false, 00:15:43.568 "supported_io_types": { 
00:15:43.568 "read": true, 00:15:43.568 "write": true, 00:15:43.568 "unmap": true, 00:15:43.568 "flush": false, 00:15:43.568 "reset": true, 00:15:43.568 "nvme_admin": false, 00:15:43.568 "nvme_io": false, 00:15:43.568 "nvme_io_md": false, 00:15:43.568 "write_zeroes": true, 00:15:43.568 "zcopy": false, 00:15:43.568 "get_zone_info": false, 00:15:43.568 "zone_management": false, 00:15:43.568 "zone_append": false, 00:15:43.568 "compare": false, 00:15:43.568 "compare_and_write": false, 00:15:43.568 "abort": false, 00:15:43.568 "seek_hole": true, 00:15:43.568 "seek_data": true, 00:15:43.568 "copy": false, 00:15:43.568 "nvme_iov_md": false 00:15:43.568 }, 00:15:43.568 "driver_specific": { 00:15:43.568 "lvol": { 00:15:43.568 "lvol_store_uuid": "df57eb5f-1de6-4e7d-9a6e-6a4476919f49", 00:15:43.568 "base_bdev": "aio_bdev", 00:15:43.568 "thin_provision": false, 00:15:43.568 "num_allocated_clusters": 38, 00:15:43.568 "snapshot": false, 00:15:43.568 "clone": false, 00:15:43.568 "esnap_clone": false 00:15:43.568 } 00:15:43.568 } 00:15:43.568 } 00:15:43.568 ] 00:15:43.568 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:15:43.568 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:43.568 20:29:35 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:15:43.829 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:15:43.829 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:43.829 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:15:44.089 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:15:44.089 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete ab626df8-2995-4c14-8793-93ff8e1af2f0 00:15:44.089 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u df57eb5f-1de6-4e7d-9a6e-6a4476919f49 00:15:44.351 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:15:44.612 00:15:44.612 real 0m17.012s 00:15:44.612 user 0m44.324s 00:15:44.612 sys 0m2.900s 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:15:44.612 ************************************ 00:15:44.612 END TEST lvs_grow_dirty 00:15:44.612 ************************************ 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:15:44.612 nvmf_trace.0 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:44.612 rmmod nvme_tcp 00:15:44.612 rmmod nvme_fabrics 00:15:44.612 rmmod nvme_keyring 00:15:44.612 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:44.872 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:15:44.872 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:15:44.872 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 1279477 ']' 00:15:44.872 20:29:36 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 1279477 00:15:44.873 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 1279477 ']' 00:15:44.873 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 1279477 00:15:44.873 20:29:36 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1279477 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1279477' 00:15:44.873 killing process with pid 1279477 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 1279477 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 1279477 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:44.873 
20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:44.873 20:29:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.418 20:29:39 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:47.418 00:15:47.418 real 0m44.227s 00:15:47.418 user 1m5.741s 00:15:47.418 sys 0m10.600s 00:15:47.418 20:29:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:47.418 20:29:39 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:15:47.418 ************************************ 00:15:47.418 END TEST nvmf_lvs_grow 00:15:47.418 ************************************ 00:15:47.418 20:29:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:47.418 20:29:39 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:47.418 20:29:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:47.418 20:29:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:47.418 20:29:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:47.418 ************************************ 00:15:47.418 START TEST nvmf_bdev_io_wait 00:15:47.418 ************************************ 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:15:47.418 * Looking for test storage... 
00:15:47.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:47.418 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:15:47.419 20:29:39 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:15:55.655 Found 0000:31:00.0 (0x8086 - 0x159b) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:15:55.655 Found 0000:31:00.1 (0x8086 - 0x159b) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:15:55.655 Found net devices under 0000:31:00.0: cvl_0_0 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:15:55.655 Found net devices under 0000:31:00.1: cvl_0_1 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:15:55.655 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:15:55.656 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:55.656 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.546 ms 00:15:55.656 00:15:55.656 --- 10.0.0.2 ping statistics --- 00:15:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.656 rtt min/avg/max/mdev = 0.546/0.546/0.546/0.000 ms 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:55.656 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:55.656 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:15:55.656 00:15:55.656 --- 10.0.0.1 ping statistics --- 00:15:55.656 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:55.656 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=1285014 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 1285014 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 1285014 ']' 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.656 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.656 20:29:47 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:55.656 [2024-07-15 20:29:47.686916] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:15:55.656 [2024-07-15 20:29:47.686963] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.656 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.656 [2024-07-15 20:29:47.766070] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:55.656 [2024-07-15 20:29:47.833279] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.656 [2024-07-15 20:29:47.833315] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.656 [2024-07-15 20:29:47.833323] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.656 [2024-07-15 20:29:47.833329] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.656 [2024-07-15 20:29:47.833335] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:55.656 [2024-07-15 20:29:47.836246] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.656 [2024-07-15 20:29:47.836426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.656 [2024-07-15 20:29:47.836636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:55.656 [2024-07-15 20:29:47.836636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 [2024-07-15 20:29:48.564518] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
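
At this point nvmf_tcp_init and the target start are complete: port cvl_0_0 was moved into the cvl_0_0_ns_spdk namespace as the target interface (10.0.0.2), cvl_0_1 stayed in the root namespace as the initiator side (10.0.0.1), nvmf_tgt was launched inside the namespace with --wait-for-rpc, and the TCP transport was created. A condensed bash sketch of that bring-up, reconstructed from the xtrace above (scripts/rpc.py stands in for the rpc_cmd wrapper, and paths are relative to the spdk checkout; this is a sketch of the traced steps, not the helper verbatim):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"              # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator side stays in the root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                           # reachability, root ns to namespace
ip netns exec "$NS" ping -c 1 10.0.0.1       # and back

ip netns exec "$NS" build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
# (the harness polls the RPC socket before issuing these)
scripts/rpc.py bdev_set_options -p 5 -c 1    # deliberately tiny bdev_io pool and cache
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192

The undersized pool from bdev_set_options -p 5 -c 1 means bdev_io allocation runs dry under load, so the workloads that follow have to queue through the io_wait machinery, which is the behavior this bdev_io_wait test exists to exercise.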
00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.227 Malloc0 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.227 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:56.487 [2024-07-15 20:29:48.636576] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1285059 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1285061 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.487 { 00:15:56.487 "params": { 00:15:56.487 "name": "Nvme$subsystem", 00:15:56.487 "trtype": "$TEST_TRANSPORT", 00:15:56.487 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.487 "adrfam": "ipv4", 00:15:56.487 "trsvcid": "$NVMF_PORT", 00:15:56.487 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.487 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.487 "hdgst": ${hdgst:-false}, 00:15:56.487 "ddgst": ${ddgst:-false} 00:15:56.487 }, 00:15:56.487 "method": "bdev_nvme_attach_controller" 00:15:56.487 } 00:15:56.487 EOF 00:15:56.487 )") 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1285064 00:15:56.487 20:29:48 
nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:15:56.487 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.488 { 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme$subsystem", 00:15:56.488 "trtype": "$TEST_TRANSPORT", 00:15:56.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "$NVMF_PORT", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.488 "hdgst": ${hdgst:-false}, 00:15:56.488 "ddgst": ${ddgst:-false} 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 } 00:15:56.488 EOF 00:15:56.488 )") 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1285068 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.488 { 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme$subsystem", 00:15:56.488 "trtype": "$TEST_TRANSPORT", 00:15:56.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "$NVMF_PORT", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.488 "hdgst": ${hdgst:-false}, 00:15:56.488 "ddgst": ${ddgst:-false} 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 } 00:15:56.488 EOF 00:15:56.488 )") 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:15:56.488 { 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme$subsystem", 00:15:56.488 "trtype": "$TEST_TRANSPORT", 00:15:56.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "$NVMF_PORT", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:15:56.488 "hdgst": ${hdgst:-false}, 00:15:56.488 "ddgst": ${ddgst:-false} 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 } 00:15:56.488 EOF 00:15:56.488 )") 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1285059 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme1", 00:15:56.488 "trtype": "tcp", 00:15:56.488 "traddr": "10.0.0.2", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "4420", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.488 "hdgst": false, 00:15:56.488 "ddgst": false 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 }' 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
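
The /dev/fd/63 in the four bdevperf command lines above is bash process substitution at work: gen_nvmf_target_json, the nvmf/common.sh helper traced here, renders one bdev_nvme_attach_controller entry per subsystem from a heredoc template, joins the entries with IFS=",", and pretty-prints the result through jq, so each bdevperf instance reads its controller config from an anonymous fd instead of a file on disk. A simplified sketch of the pattern, assuming the defaults visible in the rendered config below (the real helper's surrounding JSON may differ slightly):

gen_nvmf_target_json() {
  local subsystem config=()
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "${TEST_TRANSPORT:-tcp}",
    "traddr": "${NVMF_FIRST_TARGET_IP:-10.0.0.2}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT:-4420}",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  local IFS=,
  jq . <<EOF
{ "subsystems": [ { "subsystem": "bdev", "config": [ ${config[*]} ] } ] }
EOF
}

# Usage matching the write instance above:
#   bdevperf -m 0x10 -i 1 --json <(gen_nvmf_target_json) -q 128 -o 4096 -w write -t 1 -s 256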
00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme1", 00:15:56.488 "trtype": "tcp", 00:15:56.488 "traddr": "10.0.0.2", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "4420", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.488 "hdgst": false, 00:15:56.488 "ddgst": false 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 }' 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme1", 00:15:56.488 "trtype": "tcp", 00:15:56.488 "traddr": "10.0.0.2", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "4420", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.488 "hdgst": false, 00:15:56.488 "ddgst": false 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 }' 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:15:56.488 20:29:48 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:15:56.488 "params": { 00:15:56.488 "name": "Nvme1", 00:15:56.488 "trtype": "tcp", 00:15:56.488 "traddr": "10.0.0.2", 00:15:56.488 "adrfam": "ipv4", 00:15:56.488 "trsvcid": "4420", 00:15:56.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:15:56.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:15:56.488 "hdgst": false, 00:15:56.488 "ddgst": false 00:15:56.488 }, 00:15:56.488 "method": "bdev_nvme_attach_controller" 00:15:56.488 }' 00:15:56.488 [2024-07-15 20:29:48.690737] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:56.488 [2024-07-15 20:29:48.690787] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:15:56.488 [2024-07-15 20:29:48.691782] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:56.488 [2024-07-15 20:29:48.691821] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:15:56.488 [2024-07-15 20:29:48.693255] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:15:56.488 [2024-07-15 20:29:48.693301] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:15:56.488 [2024-07-15 20:29:48.701790] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:15:56.488 [2024-07-15 20:29:48.701860] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:15:56.488 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.488 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.488 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.488 [2024-07-15 20:29:48.826063] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.488 [2024-07-15 20:29:48.863013] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.749 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.749 [2024-07-15 20:29:48.877331] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:15:56.749 [2024-07-15 20:29:48.913268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:15:56.749 [2024-07-15 20:29:48.923269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.749 [2024-07-15 20:29:48.973271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:56.749 [2024-07-15 20:29:48.975220] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:15:56.749 [2024-07-15 20:29:49.023915] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:15:57.009 Running I/O for 1 seconds... 00:15:57.009 Running I/O for 1 seconds... 00:15:57.009 Running I/O for 1 seconds... 00:15:57.009 Running I/O for 1 seconds... 00:15:57.951 00:15:57.951 Latency(us) 00:15:57.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.951 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:15:57.951 Nvme1n1 : 1.00 21384.73 83.53 0.00 0.00 5971.43 3413.33 13434.88 00:15:57.951 =================================================================================================================== 00:15:57.951 Total : 21384.73 83.53 0.00 0.00 5971.43 3413.33 13434.88 00:15:57.951 00:15:57.951 Latency(us) 00:15:57.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.951 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:15:57.951 Nvme1n1 : 1.00 188263.43 735.40 0.00 0.00 677.42 271.36 1160.53 00:15:57.951 =================================================================================================================== 00:15:57.951 Total : 188263.43 735.40 0.00 0.00 677.42 271.36 1160.53 00:15:57.951 00:15:57.951 Latency(us) 00:15:57.951 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:57.951 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:15:57.951 Nvme1n1 : 1.01 11742.26 45.87 0.00 0.00 10863.53 5543.25 20862.29 00:15:57.951 =================================================================================================================== 00:15:57.951 Total : 11742.26 45.87 0.00 0.00 10863.53 5543.25 20862.29 00:15:58.212 00:15:58.212 Latency(us) 00:15:58.212 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:58.212 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:15:58.212 Nvme1n1 : 1.01 11884.96 46.43 0.00 0.00 10732.53 5980.16 22500.69 00:15:58.212 =================================================================================================================== 00:15:58.212 Total : 11884.96 46.43 0.00 0.00 10732.53 5980.16 22500.69 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@38 -- # wait 1285061 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1285064 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1285068 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:58.212 rmmod nvme_tcp 00:15:58.212 rmmod nvme_fabrics 00:15:58.212 rmmod nvme_keyring 00:15:58.212 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 1285014 ']' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 1285014 ']' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1285014' 00:15:58.473 killing process with pid 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 1285014 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:58.473 20:29:50 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.028 20:29:52 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:01.028 00:16:01.028 real 0m13.523s 00:16:01.028 user 0m19.381s 00:16:01.028 sys 0m7.584s 00:16:01.028 20:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:01.028 20:29:52 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:16:01.028 ************************************ 00:16:01.028 END TEST nvmf_bdev_io_wait 00:16:01.028 ************************************ 00:16:01.028 20:29:52 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:01.028 20:29:52 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:01.028 20:29:52 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:01.028 20:29:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:01.028 20:29:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:01.028 ************************************ 00:16:01.028 START TEST nvmf_queue_depth 00:16:01.028 ************************************ 00:16:01.028 20:29:52 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:16:01.028 * Looking for test storage... 
00:16:01.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:01.028 20:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:01.029 20:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:01.029 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:01.029 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:01.029 20:29:53 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:16:01.029 20:29:53 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:09.170 
20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:09.170 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:09.170 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:09.170 Found net devices under 0000:31:00.0: cvl_0_0 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:09.170 Found net devices under 0000:31:00.1: cvl_0_1 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:09.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:09.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.511 ms 00:16:09.170 00:16:09.170 --- 10.0.0.2 ping statistics --- 00:16:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.170 rtt min/avg/max/mdev = 0.511/0.511/0.511/0.000 ms 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:09.170 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:09.170 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.254 ms 00:16:09.170 00:16:09.170 --- 10.0.0.1 ping statistics --- 00:16:09.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:09.170 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=1290150 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 1290150 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1290150 ']' 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:09.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.170 20:30:00 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.170 [2024-07-15 20:30:00.841743] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:16:09.170 [2024-07-15 20:30:00.841800] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:09.170 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.170 [2024-07-15 20:30:00.933144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.170 [2024-07-15 20:30:00.996841] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:09.170 [2024-07-15 20:30:00.996877] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:09.170 [2024-07-15 20:30:00.996885] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:09.170 [2024-07-15 20:30:00.996892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:09.170 [2024-07-15 20:30:00.996897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:09.170 [2024-07-15 20:30:00.996914] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 [2024-07-15 20:30:01.659496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 Malloc0 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.431 
20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.431 [2024-07-15 20:30:01.721542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1290531 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1290531 /var/tmp/bdevperf.sock 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 1290531 ']' 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:09.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:16:09.431 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:09.432 20:30:01 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:09.432 [2024-07-15 20:30:01.758101] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:16:09.432 [2024-07-15 20:30:01.758156] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1290531 ] 00:16:09.432 EAL: No free 2048 kB hugepages reported on node 1 00:16:09.693 [2024-07-15 20:30:01.827396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:09.693 [2024-07-15 20:30:01.894499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:10.265 20:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:10.265 20:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:16:10.265 20:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:16:10.265 20:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:10.265 20:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:10.526 NVMe0n1 00:16:10.526 20:30:02 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:10.526 20:30:02 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:16:10.526 Running I/O for 10 seconds... 00:16:20.523 00:16:20.523 Latency(us) 00:16:20.523 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.523 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:16:20.523 Verification LBA range: start 0x0 length 0x4000 00:16:20.523 NVMe0n1 : 10.04 11315.56 44.20 0.00 0.00 90197.41 5215.57 75147.95 00:16:20.523 =================================================================================================================== 00:16:20.523 Total : 11315.56 44.20 0.00 0.00 90197.41 5215.57 75147.95 00:16:20.523 0 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1290531 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1290531 ']' 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1290531 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290531 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.783 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290531' 00:16:20.783 killing process with pid 1290531 00:16:20.784 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1290531 00:16:20.784 Received shutdown signal, test time was about 10.000000 seconds 00:16:20.784 00:16:20.784 Latency(us) 00:16:20.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:20.784 
=================================================================================================================== 00:16:20.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:20.784 20:30:12 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1290531 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:20.784 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:20.784 rmmod nvme_tcp 00:16:20.784 rmmod nvme_fabrics 00:16:20.784 rmmod nvme_keyring 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 1290150 ']' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 1290150 ']' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1290150' 00:16:21.046 killing process with pid 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 1290150 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:21.046 20:30:13 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.594 20:30:15 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:23.594 00:16:23.594 real 0m22.509s 00:16:23.594 user 0m25.727s 00:16:23.594 sys 0m6.883s 00:16:23.594 20:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:23.594 20:30:15 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 ************************************ 00:16:23.594 END TEST nvmf_queue_depth 00:16:23.594 ************************************ 00:16:23.594 20:30:15 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:23.594 20:30:15 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:23.594 20:30:15 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:23.594 20:30:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:23.594 20:30:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:23.594 ************************************ 00:16:23.594 START TEST nvmf_target_multipath 00:16:23.594 ************************************ 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:16:23.594 * Looking for test storage... 00:16:23.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:16:23.594 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:16:23.595 20:30:15 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath 
-- nvmf/common.sh@298 -- # local -ga mlx 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:31.729 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:31.730 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:31.730 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:31.730 Found net devices under 0000:31:00.0: cvl_0_0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:31.730 Found net devices under 0000:31:00.1: cvl_0_1 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:31.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:31.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.607 ms 00:16:31.730 00:16:31.730 --- 10.0.0.2 ping statistics --- 00:16:31.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.730 rtt min/avg/max/mdev = 0.607/0.607/0.607/0.000 ms 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:31.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:31.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:16:31.730 00:16:31.730 --- 10.0.0.1 ping statistics --- 00:16:31.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:31.730 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:16:31.730 only one NIC for nvmf test 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:31.730 rmmod nvme_tcp 00:16:31.730 rmmod nvme_fabrics 00:16:31.730 rmmod nvme_keyring 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:31.730 20:30:23 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:33.646 00:16:33.646 real 0m10.392s 00:16:33.646 user 0m2.327s 00:16:33.646 sys 0m5.948s 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.646 20:30:25 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:16:33.646 ************************************ 00:16:33.646 END TEST nvmf_target_multipath 00:16:33.646 ************************************ 00:16:33.646 20:30:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:33.646 20:30:25 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:33.646 20:30:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:33.646 20:30:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.646 20:30:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:33.646 ************************************ 00:16:33.646 START TEST nvmf_zcopy 00:16:33.646 ************************************ 00:16:33.646 20:30:25 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:16:33.908 * Looking for test storage... 
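Both the multipath pass above and the zcopy pass that follows stand up the same single-host TCP topology through nvmf_tcp_init: the second E810 port is moved into a private network namespace for the SPDK target, so the two physical ports can reach each other over the wire on one machine. Condensed from the trace, the sequence is roughly the following (interface names, addresses, and port number are verbatim from the log; treat the rest as a sketch rather than the exact nvmf/common.sh code):

# Condensed from the nvmf_tcp_init trace above; cvl_0_0 becomes the
# target-side port inside its own namespace, cvl_0_1 stays in the
# default namespace as the initiator side.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # open NVMe/TCP port
ping -c 1 10.0.0.2                                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target -> initiator

With this in place the nvmf_tgt app is launched inside cvl_0_0_ns_spdk (the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt" invocation later in the trace), which is why the listener address is 10.0.0.2 throughout.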
00:16:33.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:16:33.908 20:30:26 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:16:42.056 Found 0000:31:00.0 (0x8086 - 0x159b) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.056 
20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:16:42.056 Found 0000:31:00.1 (0x8086 - 0x159b) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:42.056 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:16:42.057 Found net devices under 0000:31:00.0: cvl_0_0 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:16:42.057 Found net devices under 0000:31:00.1: cvl_0_1 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:42.057 20:30:33 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:42.057 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:42.057 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.639 ms 00:16:42.057 00:16:42.057 --- 10.0.0.2 ping statistics --- 00:16:42.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.057 rtt min/avg/max/mdev = 0.639/0.639/0.639/0.000 ms 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:42.057 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:42.057 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.268 ms 00:16:42.057 00:16:42.057 --- 10.0.0.1 ping statistics --- 00:16:42.057 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:42.057 rtt min/avg/max/mdev = 0.268/0.268/0.268/0.000 ms 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=1302561 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 1302561 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 1302561 ']' 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:42.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:42.057 20:30:34 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.057 [2024-07-15 20:30:34.293995] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:16:42.057 [2024-07-15 20:30:34.294044] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:42.057 EAL: No free 2048 kB hugepages reported on node 1 00:16:42.057 [2024-07-15 20:30:34.385829] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:42.318 [2024-07-15 20:30:34.456675] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:42.318 [2024-07-15 20:30:34.456719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:42.318 [2024-07-15 20:30:34.456729] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:42.318 [2024-07-15 20:30:34.456737] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:42.318 [2024-07-15 20:30:34.456744] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:42.318 [2024-07-15 20:30:34.456774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 [2024-07-15 20:30:35.122226] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 [2024-07-15 20:30:35.146506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:42.891 malloc0 00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.891 
00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
20:30:35 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192
20:30:35 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json
20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:42.891 {
00:16:42.891 "params": {
00:16:42.891 "name": "Nvme$subsystem",
00:16:42.891 "trtype": "$TEST_TRANSPORT",
00:16:42.891 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:42.891 "adrfam": "ipv4",
00:16:42.891 "trsvcid": "$NVMF_PORT",
00:16:42.891 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:42.891 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:42.891 "hdgst": ${hdgst:-false},
00:16:42.891 "ddgst": ${ddgst:-false}
00:16:42.891 },
00:16:42.891 "method": "bdev_nvme_attach_controller"
00:16:42.891 }
00:16:42.891 EOF
00:16:42.891 )")
00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:16:42.891 20:30:35 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:42.891 "params": {
00:16:42.891 "name": "Nvme1",
00:16:42.892 "trtype": "tcp",
00:16:42.892 "traddr": "10.0.0.2",
00:16:42.892 "adrfam": "ipv4",
00:16:42.892 "trsvcid": "4420",
00:16:42.892 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:42.892 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:42.892 "hdgst": false,
00:16:42.892 "ddgst": false
00:16:42.892 },
00:16:42.892 "method": "bdev_nvme_attach_controller"
00:16:42.892 }'
00:16:42.892 [2024-07-15 20:30:35.245417] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:16:42.892 [2024-07-15 20:30:35.245482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1302737 ]
00:16:43.219 EAL: No free 2048 kB hugepages reported on node 1
00:16:43.219 [2024-07-15 20:30:35.316352] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:43.219 [2024-07-15 20:30:35.389426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:16:43.481 Running I/O for 10 seconds...
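The --json /dev/fd/62 argument above is how gen_nvmf_target_json reaches bdevperf: the function prints the bdev_nvme_attach_controller config shown, and bash process substitution hands it over as a file descriptor instead of a temporary file. A minimal restatement of the same invocation, with the values gen_nvmf_target_json printed in the trace:

    # Equivalent form of the 10-second verify run: attach Nvme1 over NVMe/TCP
    # to 10.0.0.2:4420, then drive it at queue depth 128 with 8 KiB (8192-byte) I/O.
    ./build/examples/bdevperf --json <(gen_nvmf_target_json) -t 10 -q 128 -w verify -o 8192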
00:16:53.494
00:16:53.494                                                                Latency(us)
00:16:53.494 Device Information          : runtime(s)    IOPS     MiB/s   Fail/s  TO/s    Average     min       max
00:16:53.494 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:16:53.494 Verification LBA range: start 0x0 length 0x1000
00:16:53.494 Nvme1n1                     : 10.01         8646.37  67.55   0.00    0.00    14751.56    696.32    29272.75
00:16:53.494 ===================================================================================================================
00:16:53.494 Total                       :               8646.37  67.55   0.00    0.00    14751.56    696.32    29272.75
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1304744
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:16:53.494 {
00:16:53.494 "params": {
00:16:53.494 "name": "Nvme$subsystem",
00:16:53.494 "trtype": "$TEST_TRANSPORT",
00:16:53.494 "traddr": "$NVMF_FIRST_TARGET_IP",
00:16:53.494 "adrfam": "ipv4",
00:16:53.494 "trsvcid": "$NVMF_PORT",
00:16:53.494 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:16:53.494 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:16:53.494 "hdgst": ${hdgst:-false},
00:16:53.494 "ddgst": ${ddgst:-false}
00:16:53.494 },
00:16:53.494 "method": "bdev_nvme_attach_controller"
00:16:53.494 }
00:16:53.494 EOF
00:16:53.494 )")
00:16:53.494 [2024-07-15 20:30:45.746363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 20:30:45.746394] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:16:53.494 20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
20:30:45 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:16:53.494 "params": {
00:16:53.494 "name": "Nvme1",
00:16:53.494 "trtype": "tcp",
00:16:53.494 "traddr": "10.0.0.2",
00:16:53.494 "adrfam": "ipv4",
00:16:53.494 "trsvcid": "4420",
00:16:53.494 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:16:53.494 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:16:53.494 "hdgst": false,
00:16:53.494 "ddgst": false
00:16:53.494 },
00:16:53.494 "method": "bdev_nvme_attach_controller"
00:16:53.494 }'
[2024-07-15 20:30:45.758356] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 20:30:45.758366] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two-line error pair again at 20:30:45.770 and 20:30:45.782 ...]
00:16:53.495 [2024-07-15 20:30:45.788178] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:16:53.495 [2024-07-15 20:30:45.788227] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1304744 ]
[... same error pair at 20:30:45.794 and 20:30:45.806 ...]
EAL: No free 2048 kB hugepages reported on node 1
[... same error pair at 20:30:45.818, 20:30:45.830 and 20:30:45.842 ...]
[2024-07-15 20:30:45.852433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
[... same error pair at 20:30:45.854, 20:30:45.866, 20:30:45.878, 20:30:45.890, 20:30:45.902 and 20:30:45.914 ...]
[2024-07-15 20:30:45.917112] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... same error pair repeating roughly every 12 ms from 20:30:45.926 through 20:30:46.179 (22 repetitions elided) ...]
Running I/O for 5 seconds...
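The error pairs flooding the rest of this run come from the test re-issuing the namespace-add RPC while bdevperf keeps I/O going: NSID 1 was already attached back at target/zcopy.sh@30, so every attempt fails, and the nvmf_rpc_ns_paused frame in the log shows the failure is reported from the subsystem-paused RPC callback. A hypothetical reconstruction of such a loop ($perfpid and rpc_cmd both appear in the trace; the loop body itself is an assumption, not a quote from zcopy.sh):

    # Hammer the add-namespace RPC until the 5-second randrw job (pid $perfpid) exits;
    # each attempt is expected to fail with "Requested NSID 1 already in use",
    # exercising the pause/resume error path rather than adding anything.
    while kill -0 "$perfpid" 2>/dev/null; do
        rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 || true
    done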
00:16:54.034 [2024-07-15 20:30:46.193967] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
[2024-07-15 20:30:46.193985] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair repeats roughly every 13 ms for the remainder of the 5-second randrw run, wall-clock 20:30:46.207 through 20:30:48.830, elapsed 00:16:54.034 through 00:16:56.681 (~200 near-identical repetitions elided) ...]
[2024-07-15 20:30:48.843017]
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.843032] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.856563] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.856578] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.869363] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.869378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.881928] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.881943] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.895090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.895105] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.908438] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.908454] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.921627] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.921643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.934301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.934317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.947415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.947431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.960188] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.960204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.973451] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.973467] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:48.986922] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:48.986938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:49.000136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:49.000153] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:49.013268] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:49.013285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:49.026374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:49.026391] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:49.039153] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:49.039170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.681 [2024-07-15 20:30:49.052784] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.681 [2024-07-15 20:30:49.052799] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.066148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.066164] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.078626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.078643] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.091706] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.091721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.104567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.104583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.117100] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.117115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.130009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.130024] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.143097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.143112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.156545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.156560] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.169097] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.169112] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.181354] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.181369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.194934] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.194949] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.208301] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.208316] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.220906] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.220922] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.233567] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.233583] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.246672] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.246688] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.259921] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.259937] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.273596] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.273611] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.286538] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.286554] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.299392] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.299410] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:56.942 [2024-07-15 20:30:49.312591] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:56.942 [2024-07-15 20:30:49.312610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.325146] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.325163] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.337735] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.337751] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.351320] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.351336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.364777] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.364793] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.378260] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.378276] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.391667] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.391684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.404287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.404302] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.417475] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.417491] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.429945] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.429961] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.442087] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.442103] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.455375] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.455390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.468032] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.468047] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.481272] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.481288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.494118] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.494134] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.507506] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.507522] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.521099] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.521115] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.533984] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.533999] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.546716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.546732] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.560241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.560263] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.573751] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.573767] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.586500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.586516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.599348] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.599364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.611930] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.611945] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.625389] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.625404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.637577] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.637592] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.651085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.651102] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.319 [2024-07-15 20:30:49.663520] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.319 [2024-07-15 20:30:49.663536] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.676570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.676586] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.689957] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.689974] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.703460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.703476] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.716474] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.716490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.729678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.729694] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.743539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.743555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.756238] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.756253] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.769106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.769121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.782193] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.782209] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.794859] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.794875] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.807526] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.807546] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.820372] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.820388] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.833965] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.833981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.846866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.846882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.859888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.859904] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.873121] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.873137] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.886653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.886668] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.899287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.899302] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.912745] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.912761] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.925968] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.925984] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.938653] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.938669] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.581 [2024-07-15 20:30:49.951551] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.581 [2024-07-15 20:30:49.951567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:49.964719] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:49.964735] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:49.978201] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:49.978217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:49.990497] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:49.990513] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.003720] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.003736] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.017033] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.017048] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.030559] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.030575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.044533] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.044549] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.057444] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.057464] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.069885] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.069902] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.082866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.082882] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.095935] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.095950] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.108718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.108733] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.122831] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.122849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.135597] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.135614] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.148571] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.148587] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.161865] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.161881] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.174801] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.174817] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.188007] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.188022] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.201039] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.201054] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:57.841 [2024-07-15 20:30:50.213963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:57.841 [2024-07-15 20:30:50.213978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.227023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.227039] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.239757] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.239772] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.252788] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.252803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.265611] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.265627] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.278730] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.278745] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.291516] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.291532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.304769] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.304788] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.318214] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.318233] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.331739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.331755] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.345170] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.345185] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.358764] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.358780] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.372344] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.372360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.385094] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.385109] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.398485] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.398500] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.411663] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.411678] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.424503] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.424519] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.437772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.437787] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.450670] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.450685] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.463273] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.463288] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.103 [2024-07-15 20:30:50.476244] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.103 [2024-07-15 20:30:50.476259] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.488837] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.488853] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.502282] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.502298] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.515642] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.515656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.529089] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.529104] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.541966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.541981] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.555054] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.555069] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.567539] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.567555] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.580739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.580754] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.593923] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.593938] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.607083] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.607098] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.620448] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.620463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.633962] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.633977] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.646819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.646834] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.363 [2024-07-15 20:30:50.660252] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.363 [2024-07-15 20:30:50.660267] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.673846] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.673861] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.687453] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.687468] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.701233] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.701248] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.713795] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.713811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.727138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.727152] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.364 [2024-07-15 20:30:50.740646] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.364 [2024-07-15 20:30:50.740661] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.753650] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.753665] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.767086] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.767101] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.780488] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.780503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.793151] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.793167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.806338] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.806354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.819517] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.819532] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.832494] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.832509] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.845716] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.845731] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.858564] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.858579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.871728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.871743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.884940] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.884955] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.897727] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.897743] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.909805] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.909820] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.922606] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.922621] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.935467] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.935483] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.948411] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.948426] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.961397] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.961412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.974523] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.974538] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:50.987106] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:50.987121] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.624 [2024-07-15 20:30:51.000415] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.624 [2024-07-15 20:30:51.000431] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.013659] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.013674] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.026990] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.027005] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.040374] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.040390] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.053430] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.053445] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.066931] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.066947] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.079903] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.079919] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.093180] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.093196] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.105818] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.105835] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:58.886 [2024-07-15 20:30:51.118914] 
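The pair of messages repeated above is the namespace-add failure path: judging by the function names in the trace, spdk_nvmf_subsystem_add_ns_ext() rejects a namespace whose NSID is already allocated, and the RPC layer's paused-subsystem callback (nvmf_rpc_ns_paused) then reports "Unable to add namespace". A minimal sketch of what reproduces the message, assuming a running target, scripts/rpc.py from the SPDK tree, and hypothetical malloc0/malloc1 bdevs:

  # The first call claims NSID 1; the second then fails with the error pair seen in this log.
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc1 -n 1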
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.118930] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair continues through 20:30:51.197699; identical occurrences elided ...]
00:16:58.886
00:16:58.886 Latency(us)
00:16:58.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:58.886 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:16:58.886 Nvme1n1 : 5.01 19419.91 151.72 0.00 0.00 6583.46 2880.85 15947.09
00:16:58.886 ===================================================================================================================
00:16:58.886 Total : 19419.91 151.72 0.00 0.00 6583.46 2880.85 15947.09
00:16:58.886 [2024-07-15 20:30:51.206953] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:16:58.886 [2024-07-15 20:30:51.206967] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair recurs at ~12 ms intervals through 20:30:51.267115; identical occurrences elided ...]
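The summary above is internally consistent: each I/O is 8192 bytes, so the MiB/s column follows from the IOPS column, and with queue depth 128 the average latency follows from Little's law (latency ~= depth / IOPS). A quick shell check using the table's own numbers:

  $ awk 'BEGIN { iops=19419.91; printf "%.2f MiB/s, ~%.0f us avg\n", iops*8192/1048576, 128/iops*1e6 }'
  151.72 MiB/s, ~6591 us avg

The ~6591 us estimate sits just above the reported 6583.46 us average, which is what you would expect when the queue is not completely full over the whole 5.01 s run.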
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the pair recurs through 20:30:51.327268 (00:16:59.147) and stops once the background I/O process has exited; identical occurrences elided ...]
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1304744) - No such process 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1304744 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:59.147 delay0 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.147 20:30:51 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:16:59.147 EAL: No free 2048 kB hugepages reported on node 1 00:16:59.147 [2024-07-15 20:30:51.463419] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:17:05.733 Initializing NVMe Controllers 00:17:05.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:05.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:05.733 Initialization complete. Launching workers.
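The abort example just launched is the point of this phase: NSID 1 now sits on a delay bdev configured for 1000000 us (1 s) average and p99 latency on both reads and writes, so submitted I/Os stay outstanding long enough for abort commands to catch them; the per-namespace and per-controller counters that follow report how many I/Os completed or failed and how many aborts were submitted and succeeded. A condensed sketch of the same setup, assuming a target that already exposes nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420 and using a hypothetical malloc0 base bdev (flag meanings as in SPDK's example apps: -c core mask, -q queue depth, -w workload, -M read percentage, -t run time in seconds):

  # Route the namespace through a delay bdev so in-flight I/O is slow enough to abort.
  ./scripts/rpc.py bdev_malloc_create -b malloc0 64 512
  ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Run mixed random I/O at queue depth 64 for 5 s, aborting outstanding commands as it goes.
  ./build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'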
00:17:05.733 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 121 00:17:05.733 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 400, failed to submit 41 00:17:05.733 success 263, unsuccess 137, failed 0 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:05.733 rmmod nvme_tcp 00:17:05.733 rmmod nvme_fabrics 00:17:05.733 rmmod nvme_keyring 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 1302561 ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 1302561 ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1302561' 00:17:05.733 killing process with pid 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 1302561 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:05.733 20:30:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.643 20:30:59 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:07.643 00:17:07.643 real 0m33.895s 00:17:07.643 user 0m44.855s 00:17:07.643 sys 0m10.691s 00:17:07.643 20:30:59 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:17:07.643 20:30:59 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:17:07.643 ************************************ 00:17:07.643 END TEST nvmf_zcopy 00:17:07.643 ************************************ 00:17:07.643 20:30:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:07.643 20:30:59 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:07.643 20:30:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:07.643 20:30:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:07.643 20:30:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:07.643 ************************************ 00:17:07.643 START TEST nvmf_nmic 00:17:07.643 ************************************ 00:17:07.643 20:30:59 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:17:07.903 * Looking for test storage... 00:17:07.903 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.903 20:31:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:17:07.904 20:31:00 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:16.051 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:16.051 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:16.051 Found net devices under 0000:31:00.0: cvl_0_0 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:16.051 Found net devices under 0000:31:00.1: cvl_0_1 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:16.051 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:16.052 20:31:07 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:16.052 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:16.052 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.711 ms 00:17:16.052 00:17:16.052 --- 10.0.0.2 ping statistics --- 00:17:16.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.052 rtt min/avg/max/mdev = 0.711/0.711/0.711/0.000 ms 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:16.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:16.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:17:16.052 00:17:16.052 --- 10.0.0.1 ping statistics --- 00:17:16.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:16.052 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=1311766 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 1311766 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 1311766 ']' 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:16.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:16.052 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.052 [2024-07-15 20:31:08.211244] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:16.052 [2024-07-15 20:31:08.211295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:16.052 EAL: No free 2048 kB hugepages reported on node 1 00:17:16.052 [2024-07-15 20:31:08.285966] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:16.052 [2024-07-15 20:31:08.352777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:16.052 [2024-07-15 20:31:08.352812] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:16.052 [2024-07-15 20:31:08.352820] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:16.052 [2024-07-15 20:31:08.352827] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:16.052 [2024-07-15 20:31:08.352833] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:16.052 [2024-07-15 20:31:08.352966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:16.052 [2024-07-15 20:31:08.356243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:16.052 [2024-07-15 20:31:08.356347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:16.052 [2024-07-15 20:31:08.356439] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.624 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:16.624 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:17:16.624 20:31:08 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:16.624 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:16.624 20:31:08 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 [2024-07-15 20:31:09.021828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 Malloc0 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 [2024-07-15 20:31:09.081056] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:17:16.885 test case1: single bdev can't be used in multiple subsystems 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.885 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.886 [2024-07-15 20:31:09.117030] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:17:16.886 [2024-07-15 20:31:09.117050] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:17:16.886 [2024-07-15 20:31:09.117058] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:17:16.886 request: 00:17:16.886 { 00:17:16.886 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:17:16.886 "namespace": { 00:17:16.886 "bdev_name": "Malloc0", 00:17:16.886 "no_auto_visible": false 00:17:16.886 }, 00:17:16.886 "method": "nvmf_subsystem_add_ns", 00:17:16.886 "req_id": 1 00:17:16.886 } 00:17:16.886 Got JSON-RPC error response 00:17:16.886 response: 00:17:16.886 { 00:17:16.886 "code": -32602, 00:17:16.886 "message": "Invalid parameters" 00:17:16.886 } 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:17:16.886 Adding namespace failed - expected result. 
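
Test case1 above relies on SPDK's exclusive_write bdev claim: once cnode1 owns Malloc0, no second subsystem may attach it. A condensed sketch of that check, using the same RPCs the script issues and expecting the JSON-RPC failure shown above:

    # Malloc0 is already claimed by nqn.2016-06.io.spdk:cnode1, so the add must fail
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
    if rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0; then
        echo "unexpected: shared bdev was accepted" >&2; exit 1
    fi
    echo ' Adding namespace failed - expected result.'
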
00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:17:16.886 test case2: host connect to nvmf target in multiple paths 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:16.886 [2024-07-15 20:31:09.129149] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.886 20:31:09 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:18.800 20:31:10 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:17:20.186 20:31:12 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:17:20.186 20:31:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:17:20.186 20:31:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:20.186 20:31:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:17:20.186 20:31:12 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:17:22.134 20:31:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:22.134 [global] 00:17:22.134 thread=1 00:17:22.134 invalidate=1 00:17:22.134 rw=write 00:17:22.134 time_based=1 00:17:22.134 runtime=1 00:17:22.134 ioengine=libaio 00:17:22.134 direct=1 00:17:22.134 bs=4096 00:17:22.134 iodepth=1 00:17:22.134 norandommap=0 00:17:22.134 numjobs=1 00:17:22.134 00:17:22.134 verify_dump=1 00:17:22.134 verify_backlog=512 00:17:22.134 verify_state_save=0 00:17:22.134 do_verify=1 00:17:22.134 verify=crc32c-intel 00:17:22.134 [job0] 00:17:22.134 filename=/dev/nvme0n1 00:17:22.134 Could not set queue depth (nvme0n1) 00:17:22.404 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:22.404 fio-3.35 00:17:22.404 Starting 1 thread 00:17:23.789 00:17:23.789 job0: (groupid=0, jobs=1): err= 0: pid=1313304: Mon Jul 15 20:31:15 2024 00:17:23.789 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:23.789 slat (nsec): min=6655, max=46069, avg=24172.55, stdev=5228.57 
00:17:23.789 clat (usec): min=388, max=1243, avg=927.04, stdev=170.76 00:17:23.789 lat (usec): min=413, max=1267, avg=951.21, stdev=171.83 00:17:23.789 clat percentiles (usec): 00:17:23.789 | 1.00th=[ 586], 5.00th=[ 676], 10.00th=[ 717], 20.00th=[ 783], 00:17:23.789 | 30.00th=[ 816], 40.00th=[ 848], 50.00th=[ 898], 60.00th=[ 971], 00:17:23.789 | 70.00th=[ 1090], 80.00th=[ 1123], 90.00th=[ 1156], 95.00th=[ 1172], 00:17:23.789 | 99.00th=[ 1205], 99.50th=[ 1205], 99.90th=[ 1237], 99.95th=[ 1237], 00:17:23.789 | 99.99th=[ 1237] 00:17:23.789 write: IOPS=883, BW=3532KiB/s (3617kB/s)(3536KiB/1001msec); 0 zone resets 00:17:23.789 slat (usec): min=9, max=23879, avg=54.62, stdev=802.29 00:17:23.789 clat (usec): min=231, max=870, avg=510.62, stdev=118.35 00:17:23.789 lat (usec): min=242, max=24528, avg=565.24, stdev=815.90 00:17:23.789 clat percentiles (usec): 00:17:23.789 | 1.00th=[ 281], 5.00th=[ 302], 10.00th=[ 371], 20.00th=[ 404], 00:17:23.789 | 30.00th=[ 465], 40.00th=[ 478], 50.00th=[ 498], 60.00th=[ 529], 00:17:23.789 | 70.00th=[ 578], 80.00th=[ 611], 90.00th=[ 660], 95.00th=[ 725], 00:17:23.789 | 99.00th=[ 807], 99.50th=[ 832], 99.90th=[ 873], 99.95th=[ 873], 00:17:23.789 | 99.99th=[ 873] 00:17:23.789 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:17:23.789 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:23.789 lat (usec) : 250=0.14%, 500=32.45%, 750=34.24%, 1000=19.20% 00:17:23.789 lat (msec) : 2=13.97% 00:17:23.789 cpu : usr=2.20%, sys=3.50%, ctx=1399, majf=0, minf=1 00:17:23.789 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:23.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:23.789 issued rwts: total=512,884,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:23.789 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:23.789 00:17:23.789 Run status group 0 (all jobs): 00:17:23.789 READ: bw=2046KiB/s (2095kB/s), 2046KiB/s-2046KiB/s (2095kB/s-2095kB/s), io=2048KiB (2097kB), run=1001-1001msec 00:17:23.789 WRITE: bw=3532KiB/s (3617kB/s), 3532KiB/s-3532KiB/s (3617kB/s-3617kB/s), io=3536KiB (3621kB), run=1001-1001msec 00:17:23.789 00:17:23.789 Disk stats (read/write): 00:17:23.789 nvme0n1: ios=537/680, merge=0/0, ticks=1423/340, in_queue=1763, util=98.50% 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:23.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # 
nvmfcleanup 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:23.789 rmmod nvme_tcp 00:17:23.789 rmmod nvme_fabrics 00:17:23.789 rmmod nvme_keyring 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 1311766 ']' 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 1311766 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 1311766 ']' 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 1311766 00:17:23.789 20:31:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1311766 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1311766' 00:17:23.789 killing process with pid 1311766 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 1311766 00:17:23.789 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 1311766 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:24.051 20:31:16 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:25.964 20:31:18 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:25.964 00:17:25.964 real 0m18.314s 00:17:25.964 user 0m49.413s 00:17:25.964 sys 0m6.790s 00:17:25.964 20:31:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:25.964 20:31:18 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:17:25.964 ************************************ 00:17:25.964 END TEST nvmf_nmic 00:17:25.964 ************************************ 00:17:25.964 20:31:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:25.964 20:31:18 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:25.964 20:31:18 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:25.964 20:31:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:25.964 20:31:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:26.224 ************************************ 00:17:26.224 START TEST nvmf_fio_target 00:17:26.224 ************************************ 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:17:26.224 * Looking for test storage... 00:17:26.224 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:17:26.224 20:31:18 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:34.370 20:31:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:17:34.370 Found 0000:31:00.0 (0x8086 - 0x159b) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:17:34.370 Found 0000:31:00.1 (0x8086 - 0x159b) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.370 20:31:26 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:17:34.370 Found net devices under 0000:31:00.0: cvl_0_0 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:17:34.370 Found net devices under 0000:31:00.1: cvl_0_1 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:34.370 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:34.371 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:34.371 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:17:34.371 00:17:34.371 --- 10.0.0.2 ping statistics --- 00:17:34.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.371 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:34.371 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:34.371 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms 00:17:34.371 00:17:34.371 --- 10.0.0.1 ping statistics --- 00:17:34.371 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:34.371 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=1318140 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 1318140 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 1318140 ']' 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:34.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
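
Everything from here on talks to a target that lives inside the cvl_0_0_ns_spdk network namespace assembled just above. A condensed sketch of that plumbing, assuming the cvl_0_0/cvl_0_1 interfaces detected earlier in the run:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk        # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1              # initiator side stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # launch the target inside the namespace: shm id 0, all tracepoint groups, cores 0-3
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
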
00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:34.371 20:31:26 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.371 [2024-07-15 20:31:26.595956] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:17:34.371 [2024-07-15 20:31:26.596019] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:34.371 EAL: No free 2048 kB hugepages reported on node 1 00:17:34.371 [2024-07-15 20:31:26.679922] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:34.631 [2024-07-15 20:31:26.754958] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:34.631 [2024-07-15 20:31:26.754999] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:34.631 [2024-07-15 20:31:26.755007] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:34.631 [2024-07-15 20:31:26.755013] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:34.631 [2024-07-15 20:31:26.755019] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:34.631 [2024-07-15 20:31:26.755164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:34.631 [2024-07-15 20:31:26.755369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:34.631 [2024-07-15 20:31:26.755371] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:34.631 [2024-07-15 20:31:26.755271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.200 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:35.200 [2024-07-15 20:31:27.569306] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:35.460 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.460 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:17:35.460 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.720 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:17:35.720 20:31:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.980 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
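At this point target/fio.sh has created the TCP transport and is building the bdev inventory that the fio jobs below exercise: two standalone malloc bdevs (Malloc0, Malloc1), a RAID-0 over two more (Malloc2, Malloc3), and a concat array over three more (Malloc4 to Malloc6), all attached as namespaces 1 to 4 of nqn.2016-06.io.spdk:cnode1 and exported on 10.0.0.2:4420, so the initiator sees them as /dev/nvme0n1 through /dev/nvme0n4. Condensed, with rpc as a hypothetical shorthand for the full scripts/rpc.py path used in the trace, the sequence completed over the following records is:

rpc=/path/to/spdk/scripts/rpc.py   # shorthand; the trace uses the full workspace path
$rpc nvmf_create_transport -t tcp -o -u 8192
for i in $(seq 0 6); do $rpc bdev_malloc_create 64 512; done       # Malloc0..Malloc6
$rpc bdev_raid_create -n raid0   -z 64 -r 0      -b 'Malloc2 Malloc3'
$rpc bdev_raid_create -n concat0 -z 64 -r concat -b 'Malloc4 Malloc5 Malloc6'
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
for b in Malloc0 Malloc1 raid0 concat0; do                         # namespaces 1..4
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 $b
done
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The initiator side then attaches with "nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420", as the trace shows shortly after.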
00:17:35.980 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:35.980 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:17:35.980 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:17:36.241 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.501 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:17:36.501 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.501 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:17:36.501 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:17:36.761 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:17:36.761 20:31:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:17:37.022 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:37.022 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:37.022 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:37.282 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:17:37.282 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:37.542 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:37.542 [2024-07-15 20:31:29.815147] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:37.542 20:31:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:17:37.803 20:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:17:38.064 20:31:30 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:17:39.450 20:31:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:17:41.382 20:31:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:17:41.382 [global] 00:17:41.382 thread=1 00:17:41.382 invalidate=1 00:17:41.382 rw=write 00:17:41.382 time_based=1 00:17:41.382 runtime=1 00:17:41.382 ioengine=libaio 00:17:41.382 direct=1 00:17:41.382 bs=4096 00:17:41.382 iodepth=1 00:17:41.382 norandommap=0 00:17:41.382 numjobs=1 00:17:41.382 00:17:41.382 verify_dump=1 00:17:41.382 verify_backlog=512 00:17:41.382 verify_state_save=0 00:17:41.382 do_verify=1 00:17:41.382 verify=crc32c-intel 00:17:41.672 [job0] 00:17:41.672 filename=/dev/nvme0n1 00:17:41.672 [job1] 00:17:41.672 filename=/dev/nvme0n2 00:17:41.672 [job2] 00:17:41.672 filename=/dev/nvme0n3 00:17:41.672 [job3] 00:17:41.672 filename=/dev/nvme0n4 00:17:41.672 Could not set queue depth (nvme0n1) 00:17:41.672 Could not set queue depth (nvme0n2) 00:17:41.672 Could not set queue depth (nvme0n3) 00:17:41.672 Could not set queue depth (nvme0n4) 00:17:41.939 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:41.939 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:41.939 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:41.939 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:41.939 fio-3.35 00:17:41.939 Starting 4 threads 00:17:43.322 00:17:43.322 job0: (groupid=0, jobs=1): err= 0: pid=1319906: Mon Jul 15 20:31:35 2024 00:17:43.322 read: IOPS=410, BW=1643KiB/s (1683kB/s)(1704KiB/1037msec) 00:17:43.322 slat (nsec): min=6440, max=44770, avg=21182.52, stdev=6829.91 00:17:43.322 clat (usec): min=578, max=42025, avg=1487.49, stdev=4825.75 00:17:43.322 lat (usec): min=603, max=42049, avg=1508.67, stdev=4826.59 00:17:43.322 clat percentiles (usec): 00:17:43.322 | 1.00th=[ 627], 5.00th=[ 734], 10.00th=[ 758], 20.00th=[ 807], 00:17:43.322 | 30.00th=[ 848], 40.00th=[ 873], 50.00th=[ 889], 60.00th=[ 914], 00:17:43.322 | 70.00th=[ 971], 80.00th=[ 1057], 90.00th=[ 1123], 95.00th=[ 1172], 00:17:43.322 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:43.322 | 99.99th=[42206] 00:17:43.322 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:17:43.322 slat (nsec): min=9859, max=68990, avg=30275.50, stdev=6897.03 00:17:43.322 
clat (usec): min=391, max=1187, avg=725.38, stdev=141.72 00:17:43.322 lat (usec): min=422, max=1219, avg=755.65, stdev=142.64 00:17:43.322 clat percentiles (usec): 00:17:43.322 | 1.00th=[ 424], 5.00th=[ 502], 10.00th=[ 545], 20.00th=[ 603], 00:17:43.322 | 30.00th=[ 652], 40.00th=[ 685], 50.00th=[ 725], 60.00th=[ 758], 00:17:43.322 | 70.00th=[ 791], 80.00th=[ 832], 90.00th=[ 914], 95.00th=[ 996], 00:17:43.322 | 99.00th=[ 1057], 99.50th=[ 1090], 99.90th=[ 1188], 99.95th=[ 1188], 00:17:43.322 | 99.99th=[ 1188] 00:17:43.322 bw ( KiB/s): min= 4096, max= 4096, per=45.65%, avg=4096.00, stdev= 0.00, samples=1 00:17:43.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:43.322 lat (usec) : 500=2.56%, 750=32.84%, 1000=51.07% 00:17:43.322 lat (msec) : 2=12.90%, 50=0.64% 00:17:43.322 cpu : usr=1.54%, sys=2.22%, ctx=939, majf=0, minf=1 00:17:43.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.322 issued rwts: total=426,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.322 job1: (groupid=0, jobs=1): err= 0: pid=1319908: Mon Jul 15 20:31:35 2024 00:17:43.322 read: IOPS=17, BW=69.2KiB/s (70.8kB/s)(72.0KiB/1041msec) 00:17:43.322 slat (nsec): min=25958, max=27222, avg=26492.06, stdev=327.81 00:17:43.322 clat (usec): min=40920, max=42051, avg=41798.23, stdev=383.74 00:17:43.322 lat (usec): min=40947, max=42077, avg=41824.72, stdev=383.59 00:17:43.322 clat percentiles (usec): 00:17:43.322 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:17:43.322 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:17:43.322 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:43.322 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:43.322 | 99.99th=[42206] 00:17:43.322 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:43.322 slat (nsec): min=9616, max=68373, avg=28645.79, stdev=11391.47 00:17:43.322 clat (usec): min=175, max=988, avg=523.40, stdev=155.22 00:17:43.322 lat (usec): min=187, max=1022, avg=552.04, stdev=160.61 00:17:43.322 clat percentiles (usec): 00:17:43.322 | 1.00th=[ 265], 5.00th=[ 306], 10.00th=[ 334], 20.00th=[ 379], 00:17:43.322 | 30.00th=[ 420], 40.00th=[ 461], 50.00th=[ 502], 60.00th=[ 553], 00:17:43.322 | 70.00th=[ 603], 80.00th=[ 668], 90.00th=[ 750], 95.00th=[ 807], 00:17:43.322 | 99.00th=[ 857], 99.50th=[ 873], 99.90th=[ 988], 99.95th=[ 988], 00:17:43.322 | 99.99th=[ 988] 00:17:43.322 bw ( KiB/s): min= 4096, max= 4096, per=45.65%, avg=4096.00, stdev= 0.00, samples=1 00:17:43.322 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:43.322 lat (usec) : 250=0.75%, 500=46.79%, 750=39.43%, 1000=9.62% 00:17:43.322 lat (msec) : 50=3.40% 00:17:43.322 cpu : usr=0.77%, sys=1.83%, ctx=532, majf=0, minf=1 00:17:43.322 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.322 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.322 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.322 issued rwts: total=18,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.322 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.322 job2: (groupid=0, jobs=1): err= 0: pid=1319909: Mon Jul 15 20:31:35 
2024 00:17:43.322 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:43.323 slat (nsec): min=7743, max=59756, avg=28847.43, stdev=4645.20 00:17:43.323 clat (usec): min=650, max=1298, avg=999.30, stdev=117.48 00:17:43.323 lat (usec): min=681, max=1324, avg=1028.14, stdev=115.66 00:17:43.323 clat percentiles (usec): 00:17:43.323 | 1.00th=[ 758], 5.00th=[ 824], 10.00th=[ 857], 20.00th=[ 898], 00:17:43.323 | 30.00th=[ 914], 40.00th=[ 947], 50.00th=[ 988], 60.00th=[ 1057], 00:17:43.323 | 70.00th=[ 1090], 80.00th=[ 1106], 90.00th=[ 1139], 95.00th=[ 1172], 00:17:43.323 | 99.00th=[ 1237], 99.50th=[ 1254], 99.90th=[ 1303], 99.95th=[ 1303], 00:17:43.323 | 99.99th=[ 1303] 00:17:43.323 write: IOPS=798, BW=3193KiB/s (3269kB/s)(3196KiB/1001msec); 0 zone resets 00:17:43.323 slat (nsec): min=9277, max=70082, avg=31803.47, stdev=11440.48 00:17:43.323 clat (usec): min=187, max=4187, avg=545.68, stdev=173.57 00:17:43.323 lat (usec): min=202, max=4237, avg=577.48, stdev=175.80 00:17:43.323 clat percentiles (usec): 00:17:43.323 | 1.00th=[ 314], 5.00th=[ 379], 10.00th=[ 408], 20.00th=[ 449], 00:17:43.323 | 30.00th=[ 486], 40.00th=[ 502], 50.00th=[ 523], 60.00th=[ 537], 00:17:43.323 | 70.00th=[ 578], 80.00th=[ 644], 90.00th=[ 725], 95.00th=[ 758], 00:17:43.323 | 99.00th=[ 816], 99.50th=[ 857], 99.90th=[ 4178], 99.95th=[ 4178], 00:17:43.323 | 99.99th=[ 4178] 00:17:43.323 bw ( KiB/s): min= 4096, max= 4096, per=45.65%, avg=4096.00, stdev= 0.00, samples=1 00:17:43.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:43.323 lat (usec) : 250=0.08%, 500=23.57%, 750=34.02%, 1000=23.19% 00:17:43.323 lat (msec) : 2=19.07%, 10=0.08% 00:17:43.323 cpu : usr=2.50%, sys=5.40%, ctx=1313, majf=0, minf=1 00:17:43.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.323 issued rwts: total=512,799,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.323 job3: (groupid=0, jobs=1): err= 0: pid=1319910: Mon Jul 15 20:31:35 2024 00:17:43.323 read: IOPS=96, BW=386KiB/s (395kB/s)(396KiB/1027msec) 00:17:43.323 slat (nsec): min=7352, max=42855, avg=24516.05, stdev=3577.85 00:17:43.323 clat (usec): min=837, max=42107, avg=6770.63, stdev=14198.96 00:17:43.323 lat (usec): min=862, max=42132, avg=6795.15, stdev=14199.03 00:17:43.323 clat percentiles (usec): 00:17:43.323 | 1.00th=[ 840], 5.00th=[ 906], 10.00th=[ 955], 20.00th=[ 979], 00:17:43.323 | 30.00th=[ 1004], 40.00th=[ 1037], 50.00th=[ 1057], 60.00th=[ 1090], 00:17:43.323 | 70.00th=[ 1106], 80.00th=[ 1156], 90.00th=[41157], 95.00th=[41681], 00:17:43.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:43.323 | 99.99th=[42206] 00:17:43.323 write: IOPS=498, BW=1994KiB/s (2042kB/s)(2048KiB/1027msec); 0 zone resets 00:17:43.323 slat (nsec): min=9612, max=51175, avg=28978.07, stdev=7912.33 00:17:43.323 clat (usec): min=191, max=1088, avg=654.55, stdev=131.67 00:17:43.323 lat (usec): min=224, max=1120, avg=683.52, stdev=133.99 00:17:43.323 clat percentiles (usec): 00:17:43.323 | 1.00th=[ 347], 5.00th=[ 437], 10.00th=[ 478], 20.00th=[ 545], 00:17:43.323 | 30.00th=[ 586], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 685], 00:17:43.323 | 70.00th=[ 725], 80.00th=[ 766], 90.00th=[ 832], 95.00th=[ 881], 00:17:43.323 | 99.00th=[ 947], 99.50th=[ 988], 99.90th=[ 1090], 99.95th=[ 
1090], 00:17:43.323 | 99.99th=[ 1090] 00:17:43.323 bw ( KiB/s): min= 4096, max= 4096, per=45.65%, avg=4096.00, stdev= 0.00, samples=1 00:17:43.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:43.323 lat (usec) : 250=0.16%, 500=10.15%, 750=53.85%, 1000=23.90% 00:17:43.323 lat (msec) : 2=9.66%, 50=2.29% 00:17:43.323 cpu : usr=1.46%, sys=1.07%, ctx=611, majf=0, minf=1 00:17:43.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:43.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:43.323 issued rwts: total=99,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:43.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:43.323 00:17:43.323 Run status group 0 (all jobs): 00:17:43.323 READ: bw=4054KiB/s (4151kB/s), 69.2KiB/s-2046KiB/s (70.8kB/s-2095kB/s), io=4220KiB (4321kB), run=1001-1041msec 00:17:43.323 WRITE: bw=8972KiB/s (9187kB/s), 1967KiB/s-3193KiB/s (2015kB/s-3269kB/s), io=9340KiB (9564kB), run=1001-1041msec 00:17:43.323 00:17:43.323 Disk stats (read/write): 00:17:43.323 nvme0n1: ios=393/512, merge=0/0, ticks=486/356, in_queue=842, util=87.27% 00:17:43.323 nvme0n2: ios=66/512, merge=0/0, ticks=742/219, in_queue=961, util=100.00% 00:17:43.323 nvme0n3: ios=535/554, merge=0/0, ticks=1411/239, in_queue=1650, util=96.40% 00:17:43.323 nvme0n4: ios=53/512, merge=0/0, ticks=683/320, in_queue=1003, util=91.64% 00:17:43.323 20:31:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:17:43.323 [global] 00:17:43.323 thread=1 00:17:43.323 invalidate=1 00:17:43.323 rw=randwrite 00:17:43.323 time_based=1 00:17:43.323 runtime=1 00:17:43.323 ioengine=libaio 00:17:43.323 direct=1 00:17:43.323 bs=4096 00:17:43.323 iodepth=1 00:17:43.323 norandommap=0 00:17:43.323 numjobs=1 00:17:43.323 00:17:43.323 verify_dump=1 00:17:43.323 verify_backlog=512 00:17:43.323 verify_state_save=0 00:17:43.323 do_verify=1 00:17:43.323 verify=crc32c-intel 00:17:43.323 [job0] 00:17:43.323 filename=/dev/nvme0n1 00:17:43.323 [job1] 00:17:43.323 filename=/dev/nvme0n2 00:17:43.323 [job2] 00:17:43.323 filename=/dev/nvme0n3 00:17:43.323 [job3] 00:17:43.323 filename=/dev/nvme0n4 00:17:43.323 Could not set queue depth (nvme0n1) 00:17:43.323 Could not set queue depth (nvme0n2) 00:17:43.323 Could not set queue depth (nvme0n3) 00:17:43.323 Could not set queue depth (nvme0n4) 00:17:43.583 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.583 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.583 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.583 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:43.583 fio-3.35 00:17:43.583 Starting 4 threads 00:17:44.980 00:17:44.980 job0: (groupid=0, jobs=1): err= 0: pid=1320423: Mon Jul 15 20:31:37 2024 00:17:44.980 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:44.980 slat (nsec): min=7077, max=45263, avg=24304.18, stdev=3089.90 00:17:44.980 clat (usec): min=603, max=1346, avg=1093.53, stdev=77.63 00:17:44.980 lat (usec): min=627, max=1371, avg=1117.83, stdev=77.92 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 865], 
5.00th=[ 955], 10.00th=[ 996], 20.00th=[ 1045], 00:17:44.980 | 30.00th=[ 1074], 40.00th=[ 1090], 50.00th=[ 1106], 60.00th=[ 1123], 00:17:44.980 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1172], 95.00th=[ 1188], 00:17:44.980 | 99.00th=[ 1254], 99.50th=[ 1287], 99.90th=[ 1352], 99.95th=[ 1352], 00:17:44.980 | 99.99th=[ 1352] 00:17:44.980 write: IOPS=588, BW=2354KiB/s (2410kB/s)(2356KiB/1001msec); 0 zone resets 00:17:44.980 slat (nsec): min=8885, max=71216, avg=25806.83, stdev=9353.69 00:17:44.980 clat (usec): min=328, max=931, avg=686.18, stdev=110.45 00:17:44.980 lat (usec): min=358, max=942, avg=711.98, stdev=114.61 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 412], 5.00th=[ 469], 10.00th=[ 537], 20.00th=[ 586], 00:17:44.980 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 693], 60.00th=[ 734], 00:17:44.980 | 70.00th=[ 758], 80.00th=[ 783], 90.00th=[ 824], 95.00th=[ 848], 00:17:44.980 | 99.00th=[ 889], 99.50th=[ 898], 99.90th=[ 930], 99.95th=[ 930], 00:17:44.980 | 99.99th=[ 930] 00:17:44.980 bw ( KiB/s): min= 4096, max= 4096, per=38.79%, avg=4096.00, stdev= 0.00, samples=1 00:17:44.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:44.980 lat (usec) : 500=3.36%, 750=32.24%, 1000=23.16% 00:17:44.980 lat (msec) : 2=41.24% 00:17:44.980 cpu : usr=2.00%, sys=2.50%, ctx=1102, majf=0, minf=1 00:17:44.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 issued rwts: total=512,589,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:44.980 job1: (groupid=0, jobs=1): err= 0: pid=1320427: Mon Jul 15 20:31:37 2024 00:17:44.980 read: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec) 00:17:44.980 slat (nsec): min=6693, max=62361, avg=26521.07, stdev=3706.88 00:17:44.980 clat (usec): min=722, max=1387, avg=1065.18, stdev=116.20 00:17:44.980 lat (usec): min=749, max=1417, avg=1091.70, stdev=116.04 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 799], 5.00th=[ 865], 10.00th=[ 898], 20.00th=[ 947], 00:17:44.980 | 30.00th=[ 1012], 40.00th=[ 1074], 50.00th=[ 1090], 60.00th=[ 1123], 00:17:44.980 | 70.00th=[ 1139], 80.00th=[ 1156], 90.00th=[ 1188], 95.00th=[ 1221], 00:17:44.980 | 99.00th=[ 1303], 99.50th=[ 1336], 99.90th=[ 1385], 99.95th=[ 1385], 00:17:44.980 | 99.99th=[ 1385] 00:17:44.980 write: IOPS=622, BW=2490KiB/s (2549kB/s)(2492KiB/1001msec); 0 zone resets 00:17:44.980 slat (nsec): min=8386, max=52421, avg=27650.91, stdev=9583.55 00:17:44.980 clat (usec): min=284, max=944, avg=666.51, stdev=105.01 00:17:44.980 lat (usec): min=316, max=976, avg=694.16, stdev=110.05 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 400], 5.00th=[ 449], 10.00th=[ 529], 20.00th=[ 594], 00:17:44.980 | 30.00th=[ 627], 40.00th=[ 652], 50.00th=[ 668], 60.00th=[ 693], 00:17:44.980 | 70.00th=[ 734], 80.00th=[ 766], 90.00th=[ 799], 95.00th=[ 816], 00:17:44.980 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 947], 99.95th=[ 947], 00:17:44.980 | 99.99th=[ 947] 00:17:44.980 bw ( KiB/s): min= 4096, max= 4096, per=38.79%, avg=4096.00, stdev= 0.00, samples=1 00:17:44.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:44.980 lat (usec) : 500=3.88%, 750=38.50%, 1000=25.64% 00:17:44.980 lat (msec) : 2=31.98% 00:17:44.980 cpu : usr=1.80%, sys=4.70%, ctx=1137, majf=0, minf=1 00:17:44.980 IO 
depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 issued rwts: total=512,623,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:44.980 job2: (groupid=0, jobs=1): err= 0: pid=1320430: Mon Jul 15 20:31:37 2024 00:17:44.980 read: IOPS=607, BW=2430KiB/s (2488kB/s)(2432KiB/1001msec) 00:17:44.980 slat (nsec): min=6247, max=52301, avg=23215.11, stdev=8915.42 00:17:44.980 clat (usec): min=244, max=1083, avg=730.92, stdev=174.07 00:17:44.980 lat (usec): min=252, max=1109, avg=754.13, stdev=177.18 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 383], 5.00th=[ 429], 10.00th=[ 490], 20.00th=[ 578], 00:17:44.980 | 30.00th=[ 619], 40.00th=[ 668], 50.00th=[ 734], 60.00th=[ 807], 00:17:44.980 | 70.00th=[ 857], 80.00th=[ 906], 90.00th=[ 955], 95.00th=[ 979], 00:17:44.980 | 99.00th=[ 1029], 99.50th=[ 1045], 99.90th=[ 1090], 99.95th=[ 1090], 00:17:44.980 | 99.99th=[ 1090] 00:17:44.980 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:17:44.980 slat (nsec): min=8706, max=64110, avg=27066.65, stdev=10097.14 00:17:44.980 clat (usec): min=149, max=773, avg=490.94, stdev=115.13 00:17:44.980 lat (usec): min=160, max=823, avg=518.01, stdev=119.61 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 273], 5.00th=[ 297], 10.00th=[ 322], 20.00th=[ 383], 00:17:44.980 | 30.00th=[ 416], 40.00th=[ 465], 50.00th=[ 498], 60.00th=[ 529], 00:17:44.980 | 70.00th=[ 562], 80.00th=[ 603], 90.00th=[ 644], 95.00th=[ 676], 00:17:44.980 | 99.00th=[ 717], 99.50th=[ 725], 99.90th=[ 766], 99.95th=[ 775], 00:17:44.980 | 99.99th=[ 775] 00:17:44.980 bw ( KiB/s): min= 4096, max= 4096, per=38.79%, avg=4096.00, stdev= 0.00, samples=1 00:17:44.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:44.980 lat (usec) : 250=0.25%, 500=35.60%, 750=46.02%, 1000=17.10% 00:17:44.980 lat (msec) : 2=1.04% 00:17:44.980 cpu : usr=3.30%, sys=5.40%, ctx=1632, majf=0, minf=1 00:17:44.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 issued rwts: total=608,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:44.980 job3: (groupid=0, jobs=1): err= 0: pid=1320431: Mon Jul 15 20:31:37 2024 00:17:44.980 read: IOPS=22, BW=88.4KiB/s (90.5kB/s)(92.0KiB/1041msec) 00:17:44.980 slat (nsec): min=6802, max=73993, avg=24466.30, stdev=12835.60 00:17:44.980 clat (usec): min=840, max=42019, avg=29163.02, stdev=19040.56 00:17:44.980 lat (usec): min=847, max=42044, avg=29187.48, stdev=19047.11 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 840], 5.00th=[ 930], 10.00th=[ 947], 20.00th=[ 1074], 00:17:44.980 | 30.00th=[ 1221], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:17:44.980 | 70.00th=[41681], 80.00th=[41681], 90.00th=[42206], 95.00th=[42206], 00:17:44.980 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:44.980 | 99.99th=[42206] 00:17:44.980 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:17:44.980 slat (nsec): min=8657, max=66496, avg=27988.98, stdev=9011.07 00:17:44.980 clat (usec): min=321, 
max=986, avg=687.07, stdev=102.51 00:17:44.980 lat (usec): min=336, max=996, avg=715.06, stdev=105.52 00:17:44.980 clat percentiles (usec): 00:17:44.980 | 1.00th=[ 433], 5.00th=[ 502], 10.00th=[ 545], 20.00th=[ 603], 00:17:44.980 | 30.00th=[ 644], 40.00th=[ 668], 50.00th=[ 701], 60.00th=[ 725], 00:17:44.980 | 70.00th=[ 742], 80.00th=[ 775], 90.00th=[ 807], 95.00th=[ 832], 00:17:44.980 | 99.00th=[ 865], 99.50th=[ 889], 99.90th=[ 988], 99.95th=[ 988], 00:17:44.980 | 99.99th=[ 988] 00:17:44.980 bw ( KiB/s): min= 4096, max= 4096, per=38.79%, avg=4096.00, stdev= 0.00, samples=1 00:17:44.980 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:17:44.980 lat (usec) : 500=4.67%, 750=63.55%, 1000=28.22% 00:17:44.980 lat (msec) : 2=0.56%, 50=2.99% 00:17:44.980 cpu : usr=0.77%, sys=1.73%, ctx=536, majf=0, minf=1 00:17:44.980 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:44.980 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:44.980 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:44.980 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:44.980 00:17:44.980 Run status group 0 (all jobs): 00:17:44.980 READ: bw=6359KiB/s (6512kB/s), 88.4KiB/s-2430KiB/s (90.5kB/s-2488kB/s), io=6620KiB (6779kB), run=1001-1041msec 00:17:44.980 WRITE: bw=10.3MiB/s (10.8MB/s), 1967KiB/s-4092KiB/s (2015kB/s-4190kB/s), io=10.7MiB (11.3MB), run=1001-1041msec 00:17:44.980 00:17:44.980 Disk stats (read/write): 00:17:44.980 nvme0n1: ios=466/512, merge=0/0, ticks=593/344, in_queue=937, util=91.98% 00:17:44.980 nvme0n2: ios=475/512, merge=0/0, ticks=782/277, in_queue=1059, util=91.74% 00:17:44.980 nvme0n3: ios=512/852, merge=0/0, ticks=307/313, in_queue=620, util=88.41% 00:17:44.980 nvme0n4: ios=12/512, merge=0/0, ticks=457/328, in_queue=785, util=89.54% 00:17:44.980 20:31:37 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:17:44.980 [global] 00:17:44.980 thread=1 00:17:44.980 invalidate=1 00:17:44.980 rw=write 00:17:44.980 time_based=1 00:17:44.980 runtime=1 00:17:44.980 ioengine=libaio 00:17:44.980 direct=1 00:17:44.980 bs=4096 00:17:44.980 iodepth=128 00:17:44.980 norandommap=0 00:17:44.980 numjobs=1 00:17:44.980 00:17:44.980 verify_dump=1 00:17:44.980 verify_backlog=512 00:17:44.980 verify_state_save=0 00:17:44.980 do_verify=1 00:17:44.980 verify=crc32c-intel 00:17:44.980 [job0] 00:17:44.980 filename=/dev/nvme0n1 00:17:44.980 [job1] 00:17:44.980 filename=/dev/nvme0n2 00:17:44.980 [job2] 00:17:44.980 filename=/dev/nvme0n3 00:17:44.980 [job3] 00:17:44.980 filename=/dev/nvme0n4 00:17:44.980 Could not set queue depth (nvme0n1) 00:17:44.981 Could not set queue depth (nvme0n2) 00:17:44.981 Could not set queue depth (nvme0n3) 00:17:44.981 Could not set queue depth (nvme0n4) 00:17:45.241 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:45.241 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:45.241 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:45.241 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:45.241 fio-3.35 00:17:45.241 Starting 4 threads 00:17:46.638 00:17:46.638 job0: 
(groupid=0, jobs=1): err= 0: pid=1320952: Mon Jul 15 20:31:38 2024 00:17:46.638 read: IOPS=5773, BW=22.6MiB/s (23.6MB/s)(22.7MiB/1005msec) 00:17:46.638 slat (nsec): min=845, max=25912k, avg=86866.46, stdev=745016.95 00:17:46.638 clat (usec): min=1180, max=61041, avg=12130.19, stdev=7681.77 00:17:46.638 lat (usec): min=1205, max=61069, avg=12217.06, stdev=7741.78 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 2573], 5.00th=[ 4817], 10.00th=[ 6390], 20.00th=[ 7242], 00:17:46.638 | 30.00th=[ 7898], 40.00th=[ 9110], 50.00th=[ 9765], 60.00th=[10552], 00:17:46.638 | 70.00th=[12125], 80.00th=[15139], 90.00th=[22414], 95.00th=[28443], 00:17:46.638 | 99.00th=[45351], 99.50th=[45876], 99.90th=[45876], 99.95th=[45876], 00:17:46.638 | 99.99th=[61080] 00:17:46.638 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:17:46.638 slat (nsec): min=1532, max=9742.9k, avg=70601.03, stdev=439143.34 00:17:46.638 clat (usec): min=858, max=35781, avg=9303.60, stdev=4954.14 00:17:46.638 lat (usec): min=1051, max=35788, avg=9374.20, stdev=4974.45 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 2638], 5.00th=[ 4047], 10.00th=[ 5145], 20.00th=[ 5735], 00:17:46.638 | 30.00th=[ 6783], 40.00th=[ 7701], 50.00th=[ 8717], 60.00th=[ 9372], 00:17:46.638 | 70.00th=[ 9896], 80.00th=[11207], 90.00th=[12911], 95.00th=[19006], 00:17:46.638 | 99.00th=[32637], 99.50th=[34341], 99.90th=[35914], 99.95th=[35914], 00:17:46.638 | 99.99th=[35914] 00:17:46.638 bw ( KiB/s): min=24576, max=24576, per=26.85%, avg=24576.00, stdev= 0.00, samples=2 00:17:46.638 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=2 00:17:46.638 lat (usec) : 1000=0.01% 00:17:46.638 lat (msec) : 2=0.23%, 4=3.03%, 10=60.29%, 20=27.52%, 50=8.91% 00:17:46.638 lat (msec) : 100=0.02% 00:17:46.638 cpu : usr=2.89%, sys=6.87%, ctx=459, majf=0, minf=1 00:17:46.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:46.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.638 issued rwts: total=5802,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.638 job1: (groupid=0, jobs=1): err= 0: pid=1320953: Mon Jul 15 20:31:38 2024 00:17:46.638 read: IOPS=6616, BW=25.8MiB/s (27.1MB/s)(26.0MiB/1006msec) 00:17:46.638 slat (nsec): min=883, max=11441k, avg=74009.00, stdev=538316.39 00:17:46.638 clat (usec): min=1349, max=34571, avg=10087.89, stdev=4334.15 00:17:46.638 lat (usec): min=1356, max=34580, avg=10161.90, stdev=4378.00 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 2933], 5.00th=[ 5407], 10.00th=[ 6390], 20.00th=[ 7177], 00:17:46.638 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9503], 60.00th=[10290], 00:17:46.638 | 70.00th=[10814], 80.00th=[12387], 90.00th=[13829], 95.00th=[15795], 00:17:46.638 | 99.00th=[30540], 99.50th=[34341], 99.90th=[34341], 99.95th=[34341], 00:17:46.638 | 99.99th=[34341] 00:17:46.638 write: IOPS=7102, BW=27.7MiB/s (29.1MB/s)(27.9MiB/1006msec); 0 zone resets 00:17:46.638 slat (nsec): min=1600, max=10275k, avg=62987.55, stdev=464982.40 00:17:46.638 clat (usec): min=703, max=25212, avg=8470.06, stdev=3666.71 00:17:46.638 lat (usec): min=761, max=28913, avg=8533.05, stdev=3701.40 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 2737], 5.00th=[ 4359], 10.00th=[ 5014], 20.00th=[ 6128], 00:17:46.638 | 30.00th=[ 6783], 40.00th=[ 7177], 50.00th=[ 7635], 
60.00th=[ 8291], 00:17:46.638 | 70.00th=[ 8848], 80.00th=[10028], 90.00th=[13304], 95.00th=[15926], 00:17:46.638 | 99.00th=[22152], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:17:46.638 | 99.99th=[25297] 00:17:46.638 bw ( KiB/s): min=27464, max=28672, per=30.67%, avg=28068.00, stdev=854.18, samples=2 00:17:46.638 iops : min= 6866, max= 7168, avg=7017.00, stdev=213.55, samples=2 00:17:46.638 lat (usec) : 750=0.01%, 1000=0.09% 00:17:46.638 lat (msec) : 2=0.28%, 4=2.12%, 10=66.39%, 20=28.38%, 50=2.72% 00:17:46.638 cpu : usr=4.28%, sys=7.36%, ctx=451, majf=0, minf=1 00:17:46.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:46.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.638 issued rwts: total=6656,7145,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.638 job2: (groupid=0, jobs=1): err= 0: pid=1320954: Mon Jul 15 20:31:38 2024 00:17:46.638 read: IOPS=5085, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1005msec) 00:17:46.638 slat (nsec): min=923, max=14037k, avg=92202.10, stdev=644502.19 00:17:46.638 clat (usec): min=1508, max=62578, avg=11852.68, stdev=8269.57 00:17:46.638 lat (usec): min=1820, max=62585, avg=11944.88, stdev=8336.62 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 2573], 5.00th=[ 5342], 10.00th=[ 6456], 20.00th=[ 8029], 00:17:46.638 | 30.00th=[ 8586], 40.00th=[ 9241], 50.00th=[ 9765], 60.00th=[10028], 00:17:46.638 | 70.00th=[11207], 80.00th=[13960], 90.00th=[16909], 95.00th=[25560], 00:17:46.638 | 99.00th=[53740], 99.50th=[54789], 99.90th=[62653], 99.95th=[62653], 00:17:46.638 | 99.99th=[62653] 00:17:46.638 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:17:46.638 slat (nsec): min=1601, max=28097k, avg=86813.67, stdev=618361.36 00:17:46.638 clat (usec): min=527, max=61727, avg=12202.28, stdev=11576.74 00:17:46.638 lat (usec): min=530, max=61736, avg=12289.09, stdev=11641.54 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 1237], 5.00th=[ 2573], 10.00th=[ 4621], 20.00th=[ 5800], 00:17:46.638 | 30.00th=[ 6718], 40.00th=[ 7439], 50.00th=[ 8455], 60.00th=[ 9503], 00:17:46.638 | 70.00th=[10814], 80.00th=[15008], 90.00th=[23200], 95.00th=[43254], 00:17:46.638 | 99.00th=[56361], 99.50th=[57410], 99.90th=[61604], 99.95th=[61604], 00:17:46.638 | 99.99th=[61604] 00:17:46.638 bw ( KiB/s): min=16384, max=24576, per=22.38%, avg=20480.00, stdev=5792.62, samples=2 00:17:46.638 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 00:17:46.638 lat (usec) : 750=0.06%, 1000=0.10% 00:17:46.638 lat (msec) : 2=2.19%, 4=3.46%, 10=56.28%, 20=27.98%, 50=7.33% 00:17:46.638 lat (msec) : 100=2.60% 00:17:46.638 cpu : usr=3.39%, sys=5.18%, ctx=475, majf=0, minf=1 00:17:46.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:17:46.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.638 issued rwts: total=5111,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.638 job3: (groupid=0, jobs=1): err= 0: pid=1320955: Mon Jul 15 20:31:38 2024 00:17:46.638 read: IOPS=4259, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1006msec) 00:17:46.638 slat (nsec): min=889, max=20856k, avg=119366.01, stdev=885207.64 00:17:46.638 clat (usec): 
min=2542, max=54980, avg=15177.69, stdev=9304.52 00:17:46.638 lat (usec): min=5097, max=54982, avg=15297.06, stdev=9390.07 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 5342], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8291], 00:17:46.638 | 30.00th=[ 8979], 40.00th=[10290], 50.00th=[11076], 60.00th=[12780], 00:17:46.638 | 70.00th=[14877], 80.00th=[21627], 90.00th=[30016], 95.00th=[38011], 00:17:46.638 | 99.00th=[42730], 99.50th=[42730], 99.90th=[46400], 99.95th=[49546], 00:17:46.638 | 99.99th=[54789] 00:17:46.638 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:17:46.638 slat (nsec): min=1560, max=17459k, avg=100358.16, stdev=650090.86 00:17:46.638 clat (usec): min=770, max=48944, avg=13429.92, stdev=7950.40 00:17:46.638 lat (usec): min=778, max=48976, avg=13530.28, stdev=8007.69 00:17:46.638 clat percentiles (usec): 00:17:46.638 | 1.00th=[ 5014], 5.00th=[ 7046], 10.00th=[ 7963], 20.00th=[ 8586], 00:17:46.638 | 30.00th=[ 8848], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10945], 00:17:46.638 | 70.00th=[13566], 80.00th=[17957], 90.00th=[25035], 95.00th=[33424], 00:17:46.638 | 99.00th=[39060], 99.50th=[39060], 99.90th=[41157], 99.95th=[41157], 00:17:46.638 | 99.99th=[49021] 00:17:46.638 bw ( KiB/s): min=16384, max=20480, per=20.14%, avg=18432.00, stdev=2896.31, samples=2 00:17:46.638 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:17:46.638 lat (usec) : 1000=0.03% 00:17:46.638 lat (msec) : 2=0.08%, 4=0.01%, 10=45.95%, 20=34.49%, 50=19.42% 00:17:46.638 lat (msec) : 100=0.02% 00:17:46.638 cpu : usr=3.88%, sys=3.38%, ctx=402, majf=0, minf=1 00:17:46.638 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:17:46.638 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:46.638 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:46.638 issued rwts: total=4285,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:46.638 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:46.638 00:17:46.639 Run status group 0 (all jobs): 00:17:46.639 READ: bw=84.9MiB/s (89.0MB/s), 16.6MiB/s-25.8MiB/s (17.4MB/s-27.1MB/s), io=85.4MiB (89.5MB), run=1005-1006msec 00:17:46.639 WRITE: bw=89.4MiB/s (93.7MB/s), 17.9MiB/s-27.7MiB/s (18.8MB/s-29.1MB/s), io=89.9MiB (94.3MB), run=1005-1006msec 00:17:46.639 00:17:46.639 Disk stats (read/write): 00:17:46.639 nvme0n1: ios=5170/5632, merge=0/0, ticks=31733/24440, in_queue=56173, util=92.48% 00:17:46.639 nvme0n2: ios=5671/5735, merge=0/0, ticks=42970/35347, in_queue=78317, util=88.28% 00:17:46.639 nvme0n3: ios=3748/4096, merge=0/0, ticks=24969/32042, in_queue=57011, util=98.74% 00:17:46.639 nvme0n4: ios=3710/4096, merge=0/0, ticks=25022/27424, in_queue=52446, util=89.54% 00:17:46.639 20:31:38 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:17:46.639 [global] 00:17:46.639 thread=1 00:17:46.639 invalidate=1 00:17:46.639 rw=randwrite 00:17:46.639 time_based=1 00:17:46.639 runtime=1 00:17:46.639 ioengine=libaio 00:17:46.639 direct=1 00:17:46.639 bs=4096 00:17:46.639 iodepth=128 00:17:46.639 norandommap=0 00:17:46.639 numjobs=1 00:17:46.639 00:17:46.639 verify_dump=1 00:17:46.639 verify_backlog=512 00:17:46.639 verify_state_save=0 00:17:46.639 do_verify=1 00:17:46.639 verify=crc32c-intel 00:17:46.639 [job0] 00:17:46.639 filename=/dev/nvme0n1 00:17:46.639 [job1] 00:17:46.639 filename=/dev/nvme0n2 00:17:46.639 [job2] 00:17:46.639 
filename=/dev/nvme0n3 00:17:46.639 [job3] 00:17:46.639 filename=/dev/nvme0n4 00:17:46.639 Could not set queue depth (nvme0n1) 00:17:46.639 Could not set queue depth (nvme0n2) 00:17:46.639 Could not set queue depth (nvme0n3) 00:17:46.639 Could not set queue depth (nvme0n4) 00:17:46.900 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:46.900 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:46.900 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:46.900 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:17:46.900 fio-3.35 00:17:46.900 Starting 4 threads 00:17:48.286 00:17:48.286 job0: (groupid=0, jobs=1): err= 0: pid=1321473: Mon Jul 15 20:31:40 2024 00:17:48.286 read: IOPS=6678, BW=26.1MiB/s (27.4MB/s)(26.2MiB/1006msec) 00:17:48.286 slat (nsec): min=881, max=35903k, avg=74531.06, stdev=650080.73 00:17:48.286 clat (usec): min=2253, max=84220, avg=9466.64, stdev=8897.19 00:17:48.286 lat (usec): min=2254, max=84227, avg=9541.17, stdev=8950.43 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 4359], 5.00th=[ 4948], 10.00th=[ 5407], 20.00th=[ 5866], 00:17:48.286 | 30.00th=[ 6259], 40.00th=[ 6783], 50.00th=[ 7439], 60.00th=[ 7898], 00:17:48.286 | 70.00th=[ 9110], 80.00th=[10945], 90.00th=[13435], 95.00th=[16188], 00:17:48.286 | 99.00th=[71828], 99.50th=[71828], 99.90th=[76022], 99.95th=[76022], 00:17:48.286 | 99.99th=[84411] 00:17:48.286 write: IOPS=7125, BW=27.8MiB/s (29.2MB/s)(28.0MiB/1006msec); 0 zone resets 00:17:48.286 slat (nsec): min=1453, max=5737.2k, avg=65717.99, stdev=353939.20 00:17:48.286 clat (usec): min=1109, max=40088, avg=8929.69, stdev=6703.12 00:17:48.286 lat (usec): min=1117, max=40092, avg=8995.41, stdev=6745.79 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 2343], 5.00th=[ 3425], 10.00th=[ 4080], 20.00th=[ 4948], 00:17:48.286 | 30.00th=[ 5473], 40.00th=[ 5866], 50.00th=[ 6325], 60.00th=[ 7177], 00:17:48.286 | 70.00th=[ 8356], 80.00th=[12125], 90.00th=[17433], 95.00th=[21890], 00:17:48.286 | 99.00th=[35390], 99.50th=[39060], 99.90th=[40109], 99.95th=[40109], 00:17:48.286 | 99.99th=[40109] 00:17:48.286 bw ( KiB/s): min=22408, max=34416, per=32.32%, avg=28412.00, stdev=8490.94, samples=2 00:17:48.286 iops : min= 5602, max= 8604, avg=7103.00, stdev=2122.73, samples=2 00:17:48.286 lat (msec) : 2=0.25%, 4=4.17%, 10=70.81%, 20=19.80%, 50=4.06% 00:17:48.286 lat (msec) : 100=0.91% 00:17:48.286 cpu : usr=4.58%, sys=5.97%, ctx=658, majf=0, minf=1 00:17:48.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:17:48.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.286 issued rwts: total=6719,7168,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.286 job1: (groupid=0, jobs=1): err= 0: pid=1321474: Mon Jul 15 20:31:40 2024 00:17:48.286 read: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec) 00:17:48.286 slat (nsec): min=931, max=11021k, avg=97518.92, stdev=643321.58 00:17:48.286 clat (usec): min=3930, max=63674, avg=11781.82, stdev=7254.61 00:17:48.286 lat (usec): min=3935, max=63677, avg=11879.34, stdev=7320.27 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 5080], 5.00th=[ 
5932], 10.00th=[ 6390], 20.00th=[ 6980], 00:17:48.286 | 30.00th=[ 7701], 40.00th=[ 8455], 50.00th=[ 9634], 60.00th=[11469], 00:17:48.286 | 70.00th=[12518], 80.00th=[14484], 90.00th=[18220], 95.00th=[26608], 00:17:48.286 | 99.00th=[45876], 99.50th=[54264], 99.90th=[61080], 99.95th=[63701], 00:17:48.286 | 99.99th=[63701] 00:17:48.286 write: IOPS=5504, BW=21.5MiB/s (22.5MB/s)(21.7MiB/1008msec); 0 zone resets 00:17:48.286 slat (nsec): min=1527, max=9185.8k, avg=85324.86, stdev=438460.94 00:17:48.286 clat (usec): min=1079, max=63676, avg=12182.87, stdev=7592.97 00:17:48.286 lat (usec): min=1087, max=63683, avg=12268.20, stdev=7627.77 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 2868], 5.00th=[ 3982], 10.00th=[ 4555], 20.00th=[ 6194], 00:17:48.286 | 30.00th=[ 7504], 40.00th=[ 8717], 50.00th=[11338], 60.00th=[12256], 00:17:48.286 | 70.00th=[14353], 80.00th=[16712], 90.00th=[21365], 95.00th=[23987], 00:17:48.286 | 99.00th=[43779], 99.50th=[56886], 99.90th=[60556], 99.95th=[63701], 00:17:48.286 | 99.99th=[63701] 00:17:48.286 bw ( KiB/s): min=20480, max=22888, per=24.67%, avg=21684.00, stdev=1702.71, samples=2 00:17:48.286 iops : min= 5120, max= 5722, avg=5421.00, stdev=425.68, samples=2 00:17:48.286 lat (msec) : 2=0.19%, 4=2.62%, 10=46.50%, 20=39.39%, 50=10.57% 00:17:48.286 lat (msec) : 100=0.74% 00:17:48.286 cpu : usr=4.07%, sys=5.26%, ctx=515, majf=0, minf=1 00:17:48.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:17:48.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.286 issued rwts: total=5120,5549,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.286 job2: (groupid=0, jobs=1): err= 0: pid=1321482: Mon Jul 15 20:31:40 2024 00:17:48.286 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:17:48.286 slat (nsec): min=931, max=9565.2k, avg=81275.31, stdev=560337.41 00:17:48.286 clat (usec): min=2174, max=24398, avg=10130.26, stdev=3396.88 00:17:48.286 lat (usec): min=2176, max=24406, avg=10211.53, stdev=3444.65 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 2769], 5.00th=[ 5800], 10.00th=[ 6652], 20.00th=[ 7701], 00:17:48.286 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 9110], 60.00th=[10945], 00:17:48.286 | 70.00th=[11863], 80.00th=[12780], 90.00th=[14615], 95.00th=[16319], 00:17:48.286 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22152], 99.95th=[22152], 00:17:48.286 | 99.99th=[24511] 00:17:48.286 write: IOPS=6349, BW=24.8MiB/s (26.0MB/s)(24.9MiB/1002msec); 0 zone resets 00:17:48.286 slat (nsec): min=1557, max=13583k, avg=78886.42, stdev=475704.21 00:17:48.286 clat (usec): min=548, max=65990, avg=10989.08, stdev=9806.24 00:17:48.286 lat (usec): min=555, max=65998, avg=11067.97, stdev=9860.61 00:17:48.286 clat percentiles (usec): 00:17:48.286 | 1.00th=[ 1336], 5.00th=[ 2966], 10.00th=[ 4686], 20.00th=[ 5997], 00:17:48.286 | 30.00th=[ 7308], 40.00th=[ 7570], 50.00th=[ 7832], 60.00th=[ 8848], 00:17:48.286 | 70.00th=[12125], 80.00th=[14091], 90.00th=[16188], 95.00th=[28967], 00:17:48.286 | 99.00th=[61080], 99.50th=[63701], 99.90th=[65799], 99.95th=[65799], 00:17:48.286 | 99.99th=[65799] 00:17:48.286 bw ( KiB/s): min=20480, max=29400, per=28.37%, avg=24940.00, stdev=6307.39, samples=2 00:17:48.286 iops : min= 5120, max= 7350, avg=6235.00, stdev=1576.85, samples=2 00:17:48.286 lat (usec) : 750=0.10%, 1000=0.08% 00:17:48.286 lat (msec) : 
2=1.31%, 4=3.18%, 10=55.86%, 20=35.13%, 50=3.23% 00:17:48.286 lat (msec) : 100=1.12% 00:17:48.286 cpu : usr=3.60%, sys=5.29%, ctx=544, majf=0, minf=1 00:17:48.286 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:17:48.286 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.286 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.286 issued rwts: total=5632,6362,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.286 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.286 job3: (groupid=0, jobs=1): err= 0: pid=1321483: Mon Jul 15 20:31:40 2024 00:17:48.286 read: IOPS=2829, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1007msec) 00:17:48.287 slat (nsec): min=985, max=17290k, avg=231008.85, stdev=1352005.52 00:17:48.287 clat (msec): min=2, max=113, avg=26.81, stdev=22.36 00:17:48.287 lat (msec): min=7, max=113, avg=27.04, stdev=22.52 00:17:48.287 clat percentiles (msec): 00:17:48.287 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 11], 20.00th=[ 12], 00:17:48.287 | 30.00th=[ 13], 40.00th=[ 15], 50.00th=[ 19], 60.00th=[ 22], 00:17:48.287 | 70.00th=[ 28], 80.00th=[ 39], 90.00th=[ 61], 95.00th=[ 70], 00:17:48.287 | 99.00th=[ 113], 99.50th=[ 114], 99.90th=[ 114], 99.95th=[ 114], 00:17:48.287 | 99.99th=[ 114] 00:17:48.287 write: IOPS=3050, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1007msec); 0 zone resets 00:17:48.287 slat (nsec): min=1555, max=10258k, avg=106032.01, stdev=644530.07 00:17:48.287 clat (usec): min=4482, max=84750, avg=16521.78, stdev=10041.61 00:17:48.287 lat (usec): min=4490, max=84753, avg=16627.81, stdev=10064.17 00:17:48.287 clat percentiles (usec): 00:17:48.287 | 1.00th=[ 6194], 5.00th=[ 7570], 10.00th=[ 9241], 20.00th=[10421], 00:17:48.287 | 30.00th=[11207], 40.00th=[11994], 50.00th=[13304], 60.00th=[14615], 00:17:48.287 | 70.00th=[17433], 80.00th=[21890], 90.00th=[25035], 95.00th=[34341], 00:17:48.287 | 99.00th=[66323], 99.50th=[83362], 99.90th=[84411], 99.95th=[84411], 00:17:48.287 | 99.99th=[84411] 00:17:48.287 bw ( KiB/s): min= 8584, max=15992, per=13.98%, avg=12288.00, stdev=5238.25, samples=2 00:17:48.287 iops : min= 2146, max= 3998, avg=3072.00, stdev=1309.56, samples=2 00:17:48.287 lat (msec) : 4=0.02%, 10=11.94%, 20=53.67%, 50=27.04%, 100=6.00% 00:17:48.287 lat (msec) : 250=1.33% 00:17:48.287 cpu : usr=1.49%, sys=4.37%, ctx=255, majf=0, minf=1 00:17:48.287 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:17:48.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.287 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:17:48.287 issued rwts: total=2849,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.287 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.287 00:17:48.287 Run status group 0 (all jobs): 00:17:48.287 READ: bw=78.7MiB/s (82.6MB/s), 11.1MiB/s-26.1MiB/s (11.6MB/s-27.4MB/s), io=79.4MiB (83.2MB), run=1002-1008msec 00:17:48.287 WRITE: bw=85.8MiB/s (90.0MB/s), 11.9MiB/s-27.8MiB/s (12.5MB/s-29.2MB/s), io=86.5MiB (90.7MB), run=1002-1008msec 00:17:48.287 00:17:48.287 Disk stats (read/write): 00:17:48.287 nvme0n1: ios=6194/6543, merge=0/0, ticks=46732/43094, in_queue=89826, util=90.18% 00:17:48.287 nvme0n2: ios=4486/4608, merge=0/0, ticks=52459/50462, in_queue=102921, util=96.53% 00:17:48.287 nvme0n3: ios=4209/5120, merge=0/0, ticks=39598/53306, in_queue=92904, util=98.63% 00:17:48.287 nvme0n4: ios=2281/2560, merge=0/0, ticks=21422/12090, in_queue=33512, util=99.89% 00:17:48.287 20:31:40 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:17:48.287 20:31:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1321670 00:17:48.287 20:31:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:17:48.287 20:31:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:17:48.287 [global] 00:17:48.287 thread=1 00:17:48.287 invalidate=1 00:17:48.287 rw=read 00:17:48.287 time_based=1 00:17:48.287 runtime=10 00:17:48.287 ioengine=libaio 00:17:48.287 direct=1 00:17:48.287 bs=4096 00:17:48.287 iodepth=1 00:17:48.287 norandommap=1 00:17:48.287 numjobs=1 00:17:48.287 00:17:48.287 [job0] 00:17:48.287 filename=/dev/nvme0n1 00:17:48.287 [job1] 00:17:48.287 filename=/dev/nvme0n2 00:17:48.287 [job2] 00:17:48.287 filename=/dev/nvme0n3 00:17:48.287 [job3] 00:17:48.287 filename=/dev/nvme0n4 00:17:48.287 Could not set queue depth (nvme0n1) 00:17:48.287 Could not set queue depth (nvme0n2) 00:17:48.287 Could not set queue depth (nvme0n3) 00:17:48.287 Could not set queue depth (nvme0n4) 00:17:48.547 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:48.547 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:48.547 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:48.547 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:17:48.547 fio-3.35 00:17:48.547 Starting 4 threads 00:17:51.113 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:17:51.374 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=356352, buflen=4096 00:17:51.374 fio: pid=1322003, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:51.374 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:17:51.634 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=9478144, buflen=4096 00:17:51.634 fio: pid=1322002, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:51.634 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:51.634 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:17:51.634 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=8679424, buflen=4096 00:17:51.634 fio: pid=1322000, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:51.634 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:51.634 20:31:43 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:17:51.895 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1839104, buflen=4096 00:17:51.895 fio: pid=1322001, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:17:51.895 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:51.895 20:31:44 
nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:17:51.895 00:17:51.895 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1322000: Mon Jul 15 20:31:44 2024 00:17:51.895 read: IOPS=726, BW=2904KiB/s (2973kB/s)(8476KiB/2919msec) 00:17:51.895 slat (usec): min=5, max=16078, avg=48.80, stdev=578.58 00:17:51.895 clat (usec): min=538, max=41635, avg=1321.99, stdev=2402.54 00:17:51.895 lat (usec): min=563, max=41642, avg=1370.80, stdev=2470.11 00:17:51.895 clat percentiles (usec): 00:17:51.895 | 1.00th=[ 857], 5.00th=[ 1012], 10.00th=[ 1057], 20.00th=[ 1106], 00:17:51.895 | 30.00th=[ 1139], 40.00th=[ 1156], 50.00th=[ 1188], 60.00th=[ 1205], 00:17:51.895 | 70.00th=[ 1221], 80.00th=[ 1237], 90.00th=[ 1270], 95.00th=[ 1303], 00:17:51.895 | 99.00th=[ 1385], 99.50th=[ 1434], 99.90th=[41157], 99.95th=[41157], 00:17:51.895 | 99.99th=[41681] 00:17:51.895 bw ( KiB/s): min= 1824, max= 3344, per=45.22%, avg=2908.80, stdev=658.45, samples=5 00:17:51.895 iops : min= 456, max= 836, avg=727.20, stdev=164.61, samples=5 00:17:51.895 lat (usec) : 750=0.61%, 1000=3.87% 00:17:51.895 lat (msec) : 2=95.05%, 50=0.42% 00:17:51.895 cpu : usr=0.55%, sys=2.30%, ctx=2124, majf=0, minf=1 00:17:51.895 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.895 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.895 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.895 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.895 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.895 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1322001: Mon Jul 15 20:31:44 2024 00:17:51.895 read: IOPS=145, BW=581KiB/s (595kB/s)(1796KiB/3091msec) 00:17:51.895 slat (usec): min=6, max=15023, avg=180.63, stdev=1392.29 00:17:51.895 clat (usec): min=298, max=42142, avg=6695.61, stdev=14031.34 00:17:51.895 lat (usec): min=322, max=42167, avg=6876.63, stdev=14037.69 00:17:51.895 clat percentiles (usec): 00:17:51.895 | 1.00th=[ 734], 5.00th=[ 865], 10.00th=[ 914], 20.00th=[ 996], 00:17:51.895 | 30.00th=[ 1045], 40.00th=[ 1090], 50.00th=[ 1123], 60.00th=[ 1139], 00:17:51.895 | 70.00th=[ 1172], 80.00th=[ 1221], 90.00th=[41681], 95.00th=[42206], 00:17:51.895 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:51.895 | 99.99th=[42206] 00:17:51.896 bw ( KiB/s): min= 96, max= 2639, per=8.10%, avg=521.17, stdev=1037.53, samples=6 00:17:51.896 iops : min= 24, max= 659, avg=130.17, stdev=259.08, samples=6 00:17:51.896 lat (usec) : 500=0.22%, 750=1.11%, 1000=19.33% 00:17:51.896 lat (msec) : 2=65.11%, 10=0.22%, 50=13.78% 00:17:51.896 cpu : usr=0.10%, sys=0.49%, ctx=456, majf=0, minf=1 00:17:51.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 complete : 0=0.2%, 4=99.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 issued rwts: total=450,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.896 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1322002: Mon Jul 15 20:31:44 2024 00:17:51.896 read: IOPS=833, BW=3333KiB/s (3413kB/s)(9256KiB/2777msec) 00:17:51.896 slat (usec): min=5, 
max=19703, avg=36.31, stdev=468.58 00:17:51.896 clat (usec): min=363, max=42134, avg=1156.37, stdev=4025.16 00:17:51.896 lat (usec): min=374, max=42162, avg=1192.69, stdev=4051.88 00:17:51.896 clat percentiles (usec): 00:17:51.896 | 1.00th=[ 420], 5.00th=[ 498], 10.00th=[ 562], 20.00th=[ 603], 00:17:51.896 | 30.00th=[ 635], 40.00th=[ 685], 50.00th=[ 766], 60.00th=[ 824], 00:17:51.896 | 70.00th=[ 865], 80.00th=[ 898], 90.00th=[ 947], 95.00th=[ 979], 00:17:51.896 | 99.00th=[ 8586], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:17:51.896 | 99.99th=[42206] 00:17:51.896 bw ( KiB/s): min= 600, max= 5376, per=55.41%, avg=3563.20, stdev=2228.33, samples=5 00:17:51.896 iops : min= 150, max= 1344, avg=890.80, stdev=557.08, samples=5 00:17:51.896 lat (usec) : 500=5.23%, 750=42.33%, 1000=49.16% 00:17:51.896 lat (msec) : 2=2.16%, 10=0.09%, 50=0.99% 00:17:51.896 cpu : usr=0.72%, sys=2.38%, ctx=2317, majf=0, minf=1 00:17:51.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 issued rwts: total=2315,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.896 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=1322003: Mon Jul 15 20:31:44 2024 00:17:51.896 read: IOPS=34, BW=135KiB/s (138kB/s)(348KiB/2586msec) 00:17:51.896 slat (nsec): min=7101, max=68694, avg=24574.97, stdev=6374.22 00:17:51.896 clat (usec): min=678, max=42145, avg=29676.17, stdev=18898.16 00:17:51.896 lat (usec): min=703, max=42169, avg=29700.74, stdev=18898.26 00:17:51.896 clat percentiles (usec): 00:17:51.896 | 1.00th=[ 676], 5.00th=[ 799], 10.00th=[ 873], 20.00th=[ 938], 00:17:51.896 | 30.00th=[41157], 40.00th=[41681], 50.00th=[41681], 60.00th=[42206], 00:17:51.896 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:17:51.896 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:17:51.896 | 99.99th=[42206] 00:17:51.896 bw ( KiB/s): min= 96, max= 280, per=2.11%, avg=136.00, stdev=80.80, samples=5 00:17:51.896 iops : min= 24, max= 70, avg=34.00, stdev=20.20, samples=5 00:17:51.896 lat (usec) : 750=3.41%, 1000=23.86% 00:17:51.896 lat (msec) : 2=2.27%, 50=69.32% 00:17:51.896 cpu : usr=0.00%, sys=0.12%, ctx=89, majf=0, minf=2 00:17:51.896 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:17:51.896 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:51.896 issued rwts: total=88,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:51.896 latency : target=0, window=0, percentile=100.00%, depth=1 00:17:51.896 00:17:51.896 Run status group 0 (all jobs): 00:17:51.896 READ: bw=6430KiB/s (6585kB/s), 135KiB/s-3333KiB/s (138kB/s-3413kB/s), io=19.4MiB (20.4MB), run=2586-3091msec 00:17:51.896 00:17:51.896 Disk stats (read/write): 00:17:51.896 nvme0n1: ios=2052/0, merge=0/0, ticks=2639/0, in_queue=2639, util=93.16% 00:17:51.896 nvme0n2: ios=447/0, merge=0/0, ticks=2991/0, in_queue=2991, util=93.62% 00:17:51.896 nvme0n3: ios=2240/0, merge=0/0, ticks=2434/0, in_queue=2434, util=96.03% 00:17:51.896 nvme0n4: ios=81/0, merge=0/0, ticks=2332/0, in_queue=2332, util=96.02% 00:17:52.157 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:17:52.157 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:17:52.157 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:52.157 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:17:52.418 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:52.418 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 1321670 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:17:52.678 20:31:44 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:52.939 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:17:52.939 nvmf hotplug test: fio failed as expected 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 
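
The sequence above is the point of the hotplug test: fio keeps reading from the four exported namespaces while the backing concat, raid, and malloc bdevs are deleted underneath it, so every job dies with err=121 (Remote I/O error) and the script treats a nonzero fio exit status as success. A minimal sketch of that delete-then-assert logic, reusing the rpc.py path from this run and assuming fio_pid holds the wrapper PID (1321670 above):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc bdev_raid_delete concat0      # hot-remove bdevs while fio I/O is still in flight
  $rpc bdev_raid_delete raid0
  for malloc in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
      $rpc bdev_malloc_delete "$malloc"
  done
  fio_status=0
  wait "$fio_pid" || fio_status=$?   # deleted namespaces surface as EREMOTEIO, so fio exits nonzero
  if [ "$fio_status" -eq 0 ]; then
      echo 'nvmf hotplug test: fio succeeded unexpectedly' >&2
      exit 1
  fi
  echo 'nvmf hotplug test: fio failed as expected'
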
00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:52.939 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:52.939 rmmod nvme_tcp 00:17:52.939 rmmod nvme_fabrics 00:17:52.939 rmmod nvme_keyring 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 1318140 ']' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 1318140 ']' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1318140' 00:17:53.201 killing process with pid 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 1318140 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:53.201 20:31:45 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.750 20:31:47 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:55.750 00:17:55.750 real 0m29.255s 00:17:55.750 user 2m39.327s 00:17:55.750 sys 0m9.609s 00:17:55.750 20:31:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:55.750 20:31:47 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.750 ************************************ 00:17:55.750 END TEST nvmf_fio_target 00:17:55.750 ************************************ 00:17:55.750 20:31:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:55.750 20:31:47 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:55.750 20:31:47 nvmf_tcp -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:55.750 20:31:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:55.750 20:31:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:55.750 ************************************ 00:17:55.750 START TEST nvmf_bdevio 00:17:55.750 ************************************ 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:17:55.750 * Looking for test storage... 00:17:55.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
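
One detail of the preamble above worth calling out: every nvmf test mints a fresh host identity with nvme gen-hostnqn, keeping both the full NQN and the bare UUID so later kernel-side connects can present a consistent --hostnqn/--hostid pair. The equivalent by hand, assuming nvme-cli is installed (the UUID extraction shown is one plausible derivation, not necessarily the exact expression common.sh uses):

  NVME_HOSTNQN=$(nvme gen-hostnqn)   # e.g. nqn.2014-08.org.nvmexpress:uuid:00539ede-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # strip through the last colon, leaving the bare uuid
  # later consumed as: nvme connect ... --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
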
00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:55.750 20:31:47 
nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:17:55.750 20:31:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
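
The arrays populated above are common.sh's NIC classification table: candidate devices are bucketed purely by PCI vendor:device ID (Intel 0x1592 and 0x159b into e810, 0x37d2 into x722, the 0x15b3 Mellanox IDs into mlx), and because this run matches the e810 case, pci_devs is narrowed to the e810 bucket before the per-device probing that follows. The same classification can be reproduced standalone with pciutils; a small sketch, assuming lspci is available:

  # list Intel E810 ports the way common.sh buckets them: vendor 0x8086, devices 0x1592/0x159b
  for dev_id in 1592 159b; do
      lspci -D -d 8086:"$dev_id"     # this run turns up 0000:31:00.0 and 0000:31:00.1, both 0x159b
  done
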
00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:03.912 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:03.912 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:18:03.912 Found net devices under 0000:31:00.0: cvl_0_0 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net 
devices under 0000:31:00.1: cvl_0_1' 00:18:03.912 Found net devices under 0000:31:00.1: cvl_0_1 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:03.912 20:31:55 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:03.912 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:03.912 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:03.912 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:03.912 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.531 ms 00:18:03.912 00:18:03.912 --- 10.0.0.2 ping statistics --- 00:18:03.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.912 rtt min/avg/max/mdev = 0.531/0.531/0.531/0.000 ms 00:18:03.912 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:03.912 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:03.912 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.348 ms 00:18:03.912 00:18:03.912 --- 10.0.0.1 ping statistics --- 00:18:03.912 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:03.912 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=1327488 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 1327488 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 1327488 ']' 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:03.913 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:03.913 [2024-07-15 20:31:56.131620] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:18:03.913 [2024-07-15 20:31:56.131684] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:03.913 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.913 [2024-07-15 20:31:56.230083] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:04.214 [2024-07-15 20:31:56.322097] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.214 [2024-07-15 20:31:56.322154] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:04.214 [2024-07-15 20:31:56.322162] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.214 [2024-07-15 20:31:56.322169] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:04.214 [2024-07-15 20:31:56.322175] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.214 [2024-07-15 20:31:56.322381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:04.214 [2024-07-15 20:31:56.322666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:04.214 [2024-07-15 20:31:56.322828] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:04.214 [2024-07-15 20:31:56.322830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 [2024-07-15 20:31:56.985495] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.784 20:31:56 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 Malloc0 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
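
Those five rpc_cmd calls are the complete NVMe-oF target bring-up for this test: a TCP transport with an 8192-byte I/O unit size, a 64 MiB malloc bdev with 512-byte blocks, subsystem nqn.2016-06.io.spdk:cnode1, its namespace, and a listener on the 10.0.0.2 address that nvmf_tcp_init assigned to the target namespace earlier. Outside the harness the same state can be built with rpc.py directly; a sketch reusing the exact arguments from this run, assuming a running nvmf_tgt reachable on the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
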
00:18:04.784 [2024-07-15 20:31:57.050506] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:04.784 { 00:18:04.784 "params": { 00:18:04.784 "name": "Nvme$subsystem", 00:18:04.784 "trtype": "$TEST_TRANSPORT", 00:18:04.784 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:04.784 "adrfam": "ipv4", 00:18:04.784 "trsvcid": "$NVMF_PORT", 00:18:04.784 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:04.784 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:04.784 "hdgst": ${hdgst:-false}, 00:18:04.784 "ddgst": ${ddgst:-false} 00:18:04.784 }, 00:18:04.784 "method": "bdev_nvme_attach_controller" 00:18:04.784 } 00:18:04.784 EOF 00:18:04.784 )") 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:18:04.784 20:31:57 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:04.784 "params": { 00:18:04.784 "name": "Nvme1", 00:18:04.784 "trtype": "tcp", 00:18:04.784 "traddr": "10.0.0.2", 00:18:04.784 "adrfam": "ipv4", 00:18:04.784 "trsvcid": "4420", 00:18:04.784 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.784 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.784 "hdgst": false, 00:18:04.784 "ddgst": false 00:18:04.784 }, 00:18:04.784 "method": "bdev_nvme_attach_controller" 00:18:04.784 }' 00:18:04.784 [2024-07-15 20:31:57.108346] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:18:04.784 [2024-07-15 20:31:57.108415] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1327740 ] 00:18:04.784 EAL: No free 2048 kB hugepages reported on node 1 00:18:05.077 [2024-07-15 20:31:57.185468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:05.077 [2024-07-15 20:31:57.260362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.077 [2024-07-15 20:31:57.260491] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:05.077 [2024-07-15 20:31:57.260494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:05.377 I/O targets: 00:18:05.377 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:05.377 00:18:05.377 00:18:05.377 CUnit - A unit testing framework for C - Version 2.1-3 00:18:05.377 http://cunit.sourceforge.net/ 00:18:05.377 00:18:05.377 00:18:05.377 Suite: bdevio tests on: Nvme1n1 00:18:05.377 Test: blockdev write read block ...passed 00:18:05.377 Test: blockdev write zeroes read block ...passed 00:18:05.377 Test: blockdev write zeroes read no split ...passed 00:18:05.377 Test: blockdev write zeroes read split ...passed 00:18:05.377 Test: blockdev write zeroes read split partial ...passed 00:18:05.377 Test: blockdev reset ...[2024-07-15 20:31:57.729439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:05.377 [2024-07-15 20:31:57.729504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x252d370 (9): Bad file descriptor 00:18:05.669 [2024-07-15 20:31:57.871028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:05.669 passed 00:18:05.669 Test: blockdev write read 8 blocks ...passed 00:18:05.669 Test: blockdev write read size > 128k ...passed 00:18:05.669 Test: blockdev write read invalid size ...passed 00:18:05.669 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:05.669 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:05.669 Test: blockdev write read max offset ...passed 00:18:05.930 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:05.930 Test: blockdev writev readv 8 blocks ...passed 00:18:05.930 Test: blockdev writev readv 30 x 1block ...passed 00:18:05.930 Test: blockdev writev readv block ...passed 00:18:05.930 Test: blockdev writev readv size > 128k ...passed 00:18:05.930 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:05.930 Test: blockdev comparev and writev ...[2024-07-15 20:31:58.093945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.093971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.093983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.093989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.094344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.094352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.094362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.094368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.094751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.094759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.094772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.094778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.095178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.095186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.095196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:05.930 [2024-07-15 20:31:58.095201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:05.930 passed 00:18:05.930 Test: blockdev nvme passthru rw ...passed 00:18:05.930 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:31:58.179852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:05.930 [2024-07-15 20:31:58.179864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.180142] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:05.930 [2024-07-15 20:31:58.180150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.180398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:05.930 [2024-07-15 20:31:58.180405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:05.930 [2024-07-15 20:31:58.180669] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:05.930 [2024-07-15 20:31:58.180675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:05.930 passed 00:18:05.930 Test: blockdev nvme admin passthru ...passed 00:18:05.930 Test: blockdev copy ...passed 00:18:05.930 00:18:05.930 Run Summary: Type Total Ran Passed Failed Inactive 00:18:05.930 suites 1 1 n/a 0 0 00:18:05.930 tests 23 23 23 0 0 00:18:05.930 asserts 152 152 152 0 n/a 00:18:05.930 00:18:05.930 Elapsed time = 1.453 seconds 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:06.191 rmmod nvme_tcp 00:18:06.191 rmmod nvme_fabrics 00:18:06.191 rmmod nvme_keyring 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 1327488 ']' 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 1327488 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
1327488 ']' 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 1327488 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1327488 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1327488' 00:18:06.191 killing process with pid 1327488 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 1327488 00:18:06.191 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 1327488 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:06.452 20:31:58 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.364 20:32:00 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:08.364 00:18:08.364 real 0m13.002s 00:18:08.364 user 0m14.062s 00:18:08.364 sys 0m6.680s 00:18:08.364 20:32:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:08.364 20:32:00 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:18:08.364 ************************************ 00:18:08.364 END TEST nvmf_bdevio 00:18:08.364 ************************************ 00:18:08.364 20:32:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:08.364 20:32:00 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:08.364 20:32:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:08.364 20:32:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:08.364 20:32:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:08.625 ************************************ 00:18:08.625 START TEST nvmf_auth_target 00:18:08.625 ************************************ 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:08.625 * Looking for test storage... 
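
Each unit in this log (nvmf_fio_target and nvmf_bdevio above, nvmf_auth_target starting here) is launched through autotest_common.sh's run_test helper, which prints the START TEST / END TEST banners and the real/user/sys timing summary seen at the close of nvmf_bdevio. A rough sketch of the wrapper's shape as inferred from the trace (simplified; the real helper also toggles xtrace and records per-test timing for the report):

  run_test() {                       # inferred shape, not the verbatim helper
      local name=$1; shift
      echo '************************************'
      echo "START TEST $name"
      echo '************************************'
      time "$@"; local rc=$?         # e.g. run_test nvmf_bdevio .../bdevio.sh --transport=tcp
      echo '************************************'
      echo "END TEST $name"
      echo '************************************'
      return $rc
  }
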
00:18:08.625 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.625 20:32:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:08.626 20:32:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:16.768 20:32:08 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:16.768 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:18:16.769 Found 0000:31:00.0 (0x8086 - 0x159b) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:18:16.769 Found 0000:31:00.1 (0x8086 - 0x159b) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: 
cvl_0_0' 00:18:16.769 Found net devices under 0000:31:00.0: cvl_0_0 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:18:16.769 Found net devices under 0000:31:00.1: cvl_0_1 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:16.769 20:32:08 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:16.769 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:16.769 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:16.769 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:16.769 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:17.031 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:17.031 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.710 ms 00:18:17.031 00:18:17.031 --- 10.0.0.2 ping statistics --- 00:18:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.031 rtt min/avg/max/mdev = 0.710/0.710/0.710/0.000 ms 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:17.031 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:17.031 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:18:17.031 00:18:17.031 --- 10.0.0.1 ping statistics --- 00:18:17.031 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:17.031 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1332759 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1332759 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1332759 ']' 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
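The nvmf_tcp_init trace above builds the test topology: one E810 port (cvl_0_0) is moved into a fresh network namespace to act as the target, the other (cvl_0_1) stays in the root namespace as the initiator, and one ping in each direction proves the 10.0.0.0/24 link before any NVMe traffic flows. A minimal standalone sketch of the same steps, runnable as root; the interface names and addresses are this run's values, so substitute your own:

  # Two-namespace NVMe/TCP test topology, distilled from the trace above.
  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0                                  # drop stale addressing
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                        # initiator -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                    # target -> initiator

Putting the target in its own namespace lets one machine exercise a real NIC-to-NIC path (NET_TYPE=phy, not virt), which is why nvmfappstart below wraps nvmf_tgt in ip netns exec.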
00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:17.031 20:32:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=1332812 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.975 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a60c4330d25a06103fa17971b63c9a1d66ff29b8a2be2e51 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZED 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a60c4330d25a06103fa17971b63c9a1d66ff29b8a2be2e51 0 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a60c4330d25a06103fa17971b63c9a1d66ff29b8a2be2e51 0 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a60c4330d25a06103fa17971b63c9a1d66ff29b8a2be2e51 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZED 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZED 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.ZED 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a3b7702d6b19c71df009a0336a1c147eec11a6ec00b56d57636c62886297052d 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.zyI 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a3b7702d6b19c71df009a0336a1c147eec11a6ec00b56d57636c62886297052d 3 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a3b7702d6b19c71df009a0336a1c147eec11a6ec00b56d57636c62886297052d 3 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a3b7702d6b19c71df009a0336a1c147eec11a6ec00b56d57636c62886297052d 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.zyI 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.zyI 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.zyI 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1d4a11f543d301e2c7f24db39c6fb299 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Bn8 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1d4a11f543d301e2c7f24db39c6fb299 1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1d4a11f543d301e2c7f24db39c6fb299 1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=1d4a11f543d301e2c7f24db39c6fb299 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Bn8 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Bn8 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Bn8 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c59256f05538a344266e084b35f4615c56d064e201688ac7 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.KCJ 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c59256f05538a344266e084b35f4615c56d064e201688ac7 2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c59256f05538a344266e084b35f4615c56d064e201688ac7 2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c59256f05538a344266e084b35f4615c56d064e201688ac7 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.KCJ 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.KCJ 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.KCJ 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1f745b6e3531e901ae5d3cb90d04ca8b4462fbcd70fceb75 00:18:17.976 
20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.xSG 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1f745b6e3531e901ae5d3cb90d04ca8b4462fbcd70fceb75 2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1f745b6e3531e901ae5d3cb90d04ca8b4462fbcd70fceb75 2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1f745b6e3531e901ae5d3cb90d04ca8b4462fbcd70fceb75 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:18:17.976 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.xSG 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.xSG 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.xSG 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=42bd9716903039a99ec205c55af8d1ea 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rvi 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 42bd9716903039a99ec205c55af8d1ea 1 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 42bd9716903039a99ec205c55af8d1ea 1 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=42bd9716903039a99ec205c55af8d1ea 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rvi 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rvi 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rvi 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8ec01973b5691154f56f83f27d065b129224554b6b5c6088087d6006daafacb3 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Cl0 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8ec01973b5691154f56f83f27d065b129224554b6b5c6088087d6006daafacb3 3 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8ec01973b5691154f56f83f27d065b129224554b6b5c6088087d6006daafacb3 3 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8ec01973b5691154f56f83f27d065b129224554b6b5c6088087d6006daafacb3 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Cl0 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Cl0 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Cl0 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 1332759 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1332759 ']' 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
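The gen_dhchap_key calls above all follow one recipe: read len/2 random bytes with xxd -p, keep the hex string as the key, wrap it in DHHC-1 secret framing via the inline python at nvmf/common.sh@705, and store the result with mode 0600 for keyring_file_add_key. A sketch of one such key with the framing spelled out; the CRC-32 tail is an assumption, but it is consistent with the secrets printed later in this log (e.g. YTYwYzQz... decodes to the a60c4330... hex key above plus four trailing bytes):

  # Sketch: generate one DHCHAP secret the way gen_dhchap_key does in the trace.
  len=48 digest=0                                 # digest field: 0=null 1=sha256 2=sha384 3=sha512
  key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)  # len hex chars, e.g. a60c4330d25a0610...
  secret=$(python3 - "$key" "$digest" <<'EOF'
  import base64, sys, zlib
  key = sys.argv[1].encode()
  crc = zlib.crc32(key).to_bytes(4, "little")     # assumed: 4-byte little-endian CRC-32 tail
  print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
  EOF
  )
  file=$(mktemp -t spdk.key-null.XXX)
  printf '%s\n' "$secret" > "$file"
  chmod 0600 "$file"                              # the mode the trace sets before loading the key

The two-digit field records which hash the secret is associated with (00 for none), and the 48- and 64-char key lengths map directly to the 24- and 32-byte xxd reads visible above.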
00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.238 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 1332812 /var/tmp/host.sock 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1332812 ']' 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:18.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.ZED 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.ZED 00:18:18.513 20:32:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.ZED 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.zyI ]] 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zyI 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zyI 00:18:18.773 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.zyI 00:18:19.033 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.033 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Bn8 00:18:19.033 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.033 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.033 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Bn8 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Bn8 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.KCJ ]] 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KCJ 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KCJ 00:18:19.034 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.KCJ 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.xSG 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.xSG 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.xSG 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.rvi ]] 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rvi 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rvi 00:18:19.294 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.rvi 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Cl0 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Cl0 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Cl0 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.555 20:32:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:19.815 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.816 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.076 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.076 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:20.076 { 00:18:20.076 "cntlid": 1, 00:18:20.076 "qid": 0, 00:18:20.076 "state": "enabled", 00:18:20.076 "thread": "nvmf_tgt_poll_group_000", 00:18:20.076 "listen_address": { 00:18:20.076 "trtype": "TCP", 00:18:20.076 "adrfam": "IPv4", 00:18:20.076 "traddr": "10.0.0.2", 00:18:20.076 "trsvcid": "4420" 00:18:20.076 }, 00:18:20.076 "peer_address": { 00:18:20.076 "trtype": "TCP", 00:18:20.076 "adrfam": "IPv4", 00:18:20.076 "traddr": "10.0.0.1", 00:18:20.076 "trsvcid": "35902" 00:18:20.076 }, 00:18:20.076 "auth": { 00:18:20.076 "state": "completed", 00:18:20.076 "digest": "sha256", 00:18:20.076 "dhgroup": "null" 00:18:20.076 } 00:18:20.076 } 00:18:20.076 ]' 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.392 20:32:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.349 20:32:13 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.349 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.609 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.609 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.868 20:32:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.868 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:21.868 { 00:18:21.868 "cntlid": 3, 00:18:21.868 "qid": 0, 00:18:21.868 
"state": "enabled", 00:18:21.868 "thread": "nvmf_tgt_poll_group_000", 00:18:21.868 "listen_address": { 00:18:21.868 "trtype": "TCP", 00:18:21.868 "adrfam": "IPv4", 00:18:21.868 "traddr": "10.0.0.2", 00:18:21.868 "trsvcid": "4420" 00:18:21.868 }, 00:18:21.868 "peer_address": { 00:18:21.868 "trtype": "TCP", 00:18:21.868 "adrfam": "IPv4", 00:18:21.868 "traddr": "10.0.0.1", 00:18:21.868 "trsvcid": "33836" 00:18:21.868 }, 00:18:21.868 "auth": { 00:18:21.868 "state": "completed", 00:18:21.868 "digest": "sha256", 00:18:21.868 "dhgroup": "null" 00:18:21.868 } 00:18:21.868 } 00:18:21.868 ]' 00:18:21.868 20:32:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.868 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.129 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:22.703 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:22.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:22.703 20:32:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:22.703 20:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.703 20:32:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.703 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.703 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:22.703 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:22.703 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:22.964 20:32:15 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:22.964 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.225 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:23.225 { 00:18:23.225 "cntlid": 5, 00:18:23.225 "qid": 0, 00:18:23.225 "state": "enabled", 00:18:23.225 "thread": "nvmf_tgt_poll_group_000", 00:18:23.225 "listen_address": { 00:18:23.225 "trtype": "TCP", 00:18:23.225 "adrfam": "IPv4", 00:18:23.225 "traddr": "10.0.0.2", 00:18:23.225 "trsvcid": "4420" 00:18:23.225 }, 00:18:23.225 "peer_address": { 00:18:23.225 "trtype": "TCP", 00:18:23.225 "adrfam": "IPv4", 00:18:23.225 "traddr": "10.0.0.1", 00:18:23.225 "trsvcid": "33876" 00:18:23.225 }, 00:18:23.225 "auth": { 00:18:23.225 "state": "completed", 00:18:23.225 "digest": "sha256", 00:18:23.225 "dhgroup": "null" 00:18:23.225 } 00:18:23.225 } 00:18:23.225 ]' 00:18:23.225 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.504 20:32:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.448 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:24.709 00:18:24.709 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:24.709 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:24.709 20:32:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:24.971 { 00:18:24.971 "cntlid": 7, 00:18:24.971 "qid": 0, 00:18:24.971 "state": "enabled", 00:18:24.971 "thread": "nvmf_tgt_poll_group_000", 00:18:24.971 "listen_address": { 00:18:24.971 "trtype": "TCP", 00:18:24.971 "adrfam": "IPv4", 00:18:24.971 "traddr": "10.0.0.2", 00:18:24.971 "trsvcid": "4420" 00:18:24.971 }, 00:18:24.971 "peer_address": { 00:18:24.971 "trtype": "TCP", 00:18:24.971 "adrfam": "IPv4", 00:18:24.971 "traddr": "10.0.0.1", 00:18:24.971 "trsvcid": "33898" 00:18:24.971 }, 00:18:24.971 "auth": { 00:18:24.971 "state": "completed", 00:18:24.971 "digest": "sha256", 00:18:24.971 "dhgroup": "null" 00:18:24.971 } 00:18:24.971 } 00:18:24.971 ]' 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.971 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.236 20:32:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:25.809 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.071 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.331 00:18:26.331 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:26.331 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:26.331 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:26.592 { 00:18:26.592 "cntlid": 9, 00:18:26.592 "qid": 0, 00:18:26.592 "state": "enabled", 00:18:26.592 "thread": "nvmf_tgt_poll_group_000", 00:18:26.592 "listen_address": { 00:18:26.592 "trtype": "TCP", 00:18:26.592 "adrfam": "IPv4", 00:18:26.592 "traddr": "10.0.0.2", 00:18:26.592 "trsvcid": "4420" 00:18:26.592 }, 00:18:26.592 "peer_address": { 00:18:26.592 "trtype": "TCP", 00:18:26.592 "adrfam": "IPv4", 00:18:26.592 "traddr": "10.0.0.1", 00:18:26.592 "trsvcid": "33928" 00:18:26.592 }, 00:18:26.592 "auth": { 00:18:26.592 "state": "completed", 00:18:26.592 "digest": "sha256", 00:18:26.592 "dhgroup": "ffdhe2048" 00:18:26.592 } 00:18:26.592 } 00:18:26.592 ]' 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:26.592 20:32:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:26.853 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:27.426 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:27.426 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.687 20:32:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:27.949 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:27.949 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:27.949 { 00:18:27.949 "cntlid": 11, 00:18:27.949 "qid": 0, 00:18:27.949 "state": "enabled", 00:18:27.949 "thread": "nvmf_tgt_poll_group_000", 00:18:27.949 "listen_address": { 00:18:27.949 "trtype": "TCP", 00:18:27.949 "adrfam": "IPv4", 00:18:27.949 "traddr": "10.0.0.2", 00:18:27.949 "trsvcid": "4420" 00:18:27.949 }, 00:18:27.949 "peer_address": { 00:18:27.949 "trtype": "TCP", 00:18:27.949 "adrfam": "IPv4", 00:18:27.949 "traddr": "10.0.0.1", 00:18:27.949 "trsvcid": "33954" 00:18:27.949 }, 00:18:27.949 "auth": { 00:18:27.949 "state": "completed", 00:18:27.949 "digest": "sha256", 00:18:27.949 "dhgroup": "ffdhe2048" 00:18:27.949 } 00:18:27.949 } 00:18:27.949 ]' 00:18:27.949 
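The qpairs dump above, together with the jq assertions that follow it, is the verification half of each test iteration: after the controller attaches with a given key, the test reads back the subsystem's active queue pairs and checks that the negotiated auth parameters match what was configured. Outside the harness, the same check reduces to roughly this sketch (the rpc.py path, default target socket, and the sha256/ffdhe2048 expectations are carried over from this particular run, not a fixed interface):

  # Dump the subsystem's qpairs and assert the negotiated DH-HMAC-CHAP parameters
  qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe2048 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]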
20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:28.210 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:28.471 20:32:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.044 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.305 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.566 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:29.566 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:29.567 { 00:18:29.567 "cntlid": 13, 00:18:29.567 "qid": 0, 00:18:29.567 "state": "enabled", 00:18:29.567 "thread": "nvmf_tgt_poll_group_000", 00:18:29.567 "listen_address": { 00:18:29.567 "trtype": "TCP", 00:18:29.567 "adrfam": "IPv4", 00:18:29.567 "traddr": "10.0.0.2", 00:18:29.567 "trsvcid": "4420" 00:18:29.567 }, 00:18:29.567 "peer_address": { 00:18:29.567 "trtype": "TCP", 00:18:29.567 "adrfam": "IPv4", 00:18:29.567 "traddr": "10.0.0.1", 00:18:29.567 "trsvcid": "33978" 00:18:29.567 }, 00:18:29.567 "auth": { 00:18:29.567 "state": "completed", 00:18:29.567 "digest": "sha256", 00:18:29.567 "dhgroup": "ffdhe2048" 00:18:29.567 } 00:18:29.567 } 00:18:29.567 ]' 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:29.567 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:29.827 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:29.828 20:32:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:29.828 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.828 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.828 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.828 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:30.772 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.772 20:32:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:30.773 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:31.039 00:18:31.039 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:31.039 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:31.039 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:31.302 { 00:18:31.302 "cntlid": 15, 00:18:31.302 "qid": 0, 00:18:31.302 "state": "enabled", 00:18:31.302 "thread": "nvmf_tgt_poll_group_000", 00:18:31.302 "listen_address": { 00:18:31.302 "trtype": "TCP", 00:18:31.302 "adrfam": "IPv4", 00:18:31.302 "traddr": "10.0.0.2", 00:18:31.302 "trsvcid": "4420" 00:18:31.302 }, 00:18:31.302 "peer_address": { 00:18:31.302 "trtype": "TCP", 00:18:31.302 "adrfam": "IPv4", 00:18:31.302 "traddr": "10.0.0.1", 00:18:31.302 "trsvcid": "58968" 00:18:31.302 }, 00:18:31.302 "auth": { 00:18:31.302 "state": "completed", 00:18:31.302 "digest": "sha256", 00:18:31.302 "dhgroup": "ffdhe2048" 00:18:31.302 } 00:18:31.302 } 00:18:31.302 ]' 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:31.302 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:31.564 20:32:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:18:32.135 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.135 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.135 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:32.135 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.136 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.136 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.136 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.136 20:32:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:32.136 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:32.136 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.396 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:32.656 00:18:32.657 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:32.657 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:32.657 20:32:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:32.918 { 00:18:32.918 "cntlid": 17, 00:18:32.918 "qid": 0, 00:18:32.918 "state": "enabled", 00:18:32.918 "thread": "nvmf_tgt_poll_group_000", 00:18:32.918 "listen_address": { 00:18:32.918 "trtype": "TCP", 00:18:32.918 "adrfam": "IPv4", 
00:18:32.918 "traddr": "10.0.0.2", 00:18:32.918 "trsvcid": "4420" 00:18:32.918 }, 00:18:32.918 "peer_address": { 00:18:32.918 "trtype": "TCP", 00:18:32.918 "adrfam": "IPv4", 00:18:32.918 "traddr": "10.0.0.1", 00:18:32.918 "trsvcid": "58994" 00:18:32.918 }, 00:18:32.918 "auth": { 00:18:32.918 "state": "completed", 00:18:32.918 "digest": "sha256", 00:18:32.918 "dhgroup": "ffdhe3072" 00:18:32.918 } 00:18:32.918 } 00:18:32.918 ]' 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.918 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:33.179 20:32:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:33.751 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:34.012 20:32:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.012 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.322 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:34.322 { 00:18:34.322 "cntlid": 19, 00:18:34.322 "qid": 0, 00:18:34.322 "state": "enabled", 00:18:34.322 "thread": "nvmf_tgt_poll_group_000", 00:18:34.322 "listen_address": { 00:18:34.322 "trtype": "TCP", 00:18:34.322 "adrfam": "IPv4", 00:18:34.322 "traddr": "10.0.0.2", 00:18:34.322 "trsvcid": "4420" 00:18:34.322 }, 00:18:34.322 "peer_address": { 00:18:34.322 "trtype": "TCP", 00:18:34.322 "adrfam": "IPv4", 00:18:34.322 "traddr": "10.0.0.1", 00:18:34.322 "trsvcid": "59012" 00:18:34.322 }, 00:18:34.322 "auth": { 00:18:34.322 "state": "completed", 00:18:34.322 "digest": "sha256", 00:18:34.322 "dhgroup": "ffdhe3072" 00:18:34.322 } 00:18:34.322 } 00:18:34.322 ]' 00:18:34.322 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:34.606 20:32:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.606 20:32:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:35.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.547 20:32:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:35.807 00:18:35.807 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:35.807 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.807 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:36.066 { 00:18:36.066 "cntlid": 21, 00:18:36.066 "qid": 0, 00:18:36.066 "state": "enabled", 00:18:36.066 "thread": "nvmf_tgt_poll_group_000", 00:18:36.066 "listen_address": { 00:18:36.066 "trtype": "TCP", 00:18:36.066 "adrfam": "IPv4", 00:18:36.066 "traddr": "10.0.0.2", 00:18:36.066 "trsvcid": "4420" 00:18:36.066 }, 00:18:36.066 "peer_address": { 00:18:36.066 "trtype": "TCP", 00:18:36.066 "adrfam": "IPv4", 00:18:36.066 "traddr": "10.0.0.1", 00:18:36.066 "trsvcid": "59032" 00:18:36.066 }, 00:18:36.066 "auth": { 00:18:36.066 "state": "completed", 00:18:36.066 "digest": "sha256", 00:18:36.066 "dhgroup": "ffdhe3072" 00:18:36.066 } 00:18:36.066 } 00:18:36.066 ]' 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:36.066 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:36.067 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:36.327 20:32:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:36.897 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
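Every iteration closes with the same host-side round trip seen above: the kernel initiator authenticates against the target with the matching DH-HMAC-CHAP secret pair, then disconnects so the next digest/dhgroup/key combination starts from a clean state. Stripped of the harness, that leg is roughly the following (NQN, host UUID, and flag spelling are taken from this run's nvme-cli invocations; the secrets are elided):

  # Kernel-initiator leg: authenticate against the target, then tear down
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
      --dhchap-secret 'DHHC-1:...' --dhchap-ctrl-secret 'DHHC-1:...'
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

On success the disconnect reports the single authenticated controller, exactly as in the "disconnected 1 controller(s)" lines throughout this log.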
00:18:36.897 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:36.897 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:36.897 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.898 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:36.898 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:36.898 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:36.898 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.159 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:37.419 00:18:37.419 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:37.419 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:37.419 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:37.679 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:37.679 { 00:18:37.679 "cntlid": 23, 00:18:37.679 "qid": 0, 00:18:37.679 "state": "enabled", 00:18:37.679 "thread": "nvmf_tgt_poll_group_000", 00:18:37.679 "listen_address": { 00:18:37.679 "trtype": "TCP", 00:18:37.679 "adrfam": "IPv4", 00:18:37.679 "traddr": "10.0.0.2", 00:18:37.679 "trsvcid": "4420" 00:18:37.679 }, 00:18:37.679 "peer_address": { 00:18:37.679 "trtype": "TCP", 00:18:37.679 "adrfam": "IPv4", 00:18:37.679 "traddr": "10.0.0.1", 00:18:37.679 "trsvcid": "59056" 00:18:37.679 }, 00:18:37.679 "auth": { 00:18:37.680 "state": "completed", 00:18:37.680 "digest": "sha256", 00:18:37.680 "dhgroup": "ffdhe3072" 00:18:37.680 } 00:18:37.680 } 00:18:37.680 ]' 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.680 20:32:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.940 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:18:38.510 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.511 20:32:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.772 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.033 00:18:39.033 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:39.033 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:39.033 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:39.293 { 00:18:39.293 "cntlid": 25, 00:18:39.293 "qid": 0, 00:18:39.293 "state": "enabled", 00:18:39.293 "thread": "nvmf_tgt_poll_group_000", 00:18:39.293 "listen_address": { 00:18:39.293 "trtype": "TCP", 00:18:39.293 "adrfam": "IPv4", 00:18:39.293 "traddr": "10.0.0.2", 00:18:39.293 "trsvcid": "4420" 00:18:39.293 }, 00:18:39.293 "peer_address": { 00:18:39.293 "trtype": "TCP", 00:18:39.293 "adrfam": "IPv4", 00:18:39.293 "traddr": "10.0.0.1", 00:18:39.293 "trsvcid": "59092" 00:18:39.293 }, 00:18:39.293 "auth": { 00:18:39.293 "state": "completed", 00:18:39.293 "digest": "sha256", 00:18:39.293 "dhgroup": "ffdhe4096" 00:18:39.293 } 00:18:39.293 } 00:18:39.293 ]' 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:39.293 20:32:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:39.293 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:39.551 20:32:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:40.118 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:40.118 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.378 20:32:32 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.378 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:40.637 00:18:40.637 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:40.637 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.637 20:32:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:40.898 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:40.898 { 00:18:40.898 "cntlid": 27, 00:18:40.898 "qid": 0, 00:18:40.898 "state": "enabled", 00:18:40.899 "thread": "nvmf_tgt_poll_group_000", 00:18:40.899 "listen_address": { 00:18:40.899 "trtype": "TCP", 00:18:40.899 "adrfam": "IPv4", 00:18:40.899 "traddr": "10.0.0.2", 00:18:40.899 "trsvcid": "4420" 00:18:40.899 }, 00:18:40.899 "peer_address": { 00:18:40.899 "trtype": "TCP", 00:18:40.899 "adrfam": "IPv4", 00:18:40.899 "traddr": "10.0.0.1", 00:18:40.899 "trsvcid": "59110" 00:18:40.899 }, 00:18:40.899 "auth": { 00:18:40.899 "state": "completed", 00:18:40.899 "digest": "sha256", 00:18:40.899 "dhgroup": "ffdhe4096" 00:18:40.899 } 00:18:40.899 } 00:18:40.899 ]' 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.899 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:41.159 20:32:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:42.155 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.155 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:42.416 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:42.416 20:32:34 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:42.416 { 00:18:42.416 "cntlid": 29, 00:18:42.416 "qid": 0, 00:18:42.416 "state": "enabled", 00:18:42.416 "thread": "nvmf_tgt_poll_group_000", 00:18:42.416 "listen_address": { 00:18:42.416 "trtype": "TCP", 00:18:42.416 "adrfam": "IPv4", 00:18:42.416 "traddr": "10.0.0.2", 00:18:42.416 "trsvcid": "4420" 00:18:42.416 }, 00:18:42.416 "peer_address": { 00:18:42.416 "trtype": "TCP", 00:18:42.416 "adrfam": "IPv4", 00:18:42.416 "traddr": "10.0.0.1", 00:18:42.416 "trsvcid": "47728" 00:18:42.416 }, 00:18:42.416 "auth": { 00:18:42.416 "state": "completed", 00:18:42.416 "digest": "sha256", 00:18:42.416 "dhgroup": "ffdhe4096" 00:18:42.416 } 00:18:42.416 } 00:18:42.416 ]' 00:18:42.416 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.676 20:32:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.676 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.614 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.614 20:32:35 
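# The jq assertions above boil down to reading the negotiated auth parameters
# back from the target per qpair; a sketch, assuming rpc_cmd talks to the
# target's RPC socket as elsewhere in this script:
qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # negotiated hash
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe4096 ]]  # negotiated DH group
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # handshake finished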
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.614 20:32:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:43.873 00:18:43.873 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:43.873 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.873 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:44.132 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:44.133 { 00:18:44.133 "cntlid": 31, 00:18:44.133 "qid": 0, 00:18:44.133 "state": "enabled", 00:18:44.133 "thread": "nvmf_tgt_poll_group_000", 00:18:44.133 "listen_address": { 00:18:44.133 "trtype": "TCP", 00:18:44.133 "adrfam": "IPv4", 00:18:44.133 "traddr": "10.0.0.2", 00:18:44.133 "trsvcid": "4420" 00:18:44.133 }, 
00:18:44.133 "peer_address": { 00:18:44.133 "trtype": "TCP", 00:18:44.133 "adrfam": "IPv4", 00:18:44.133 "traddr": "10.0.0.1", 00:18:44.133 "trsvcid": "47742" 00:18:44.133 }, 00:18:44.133 "auth": { 00:18:44.133 "state": "completed", 00:18:44.133 "digest": "sha256", 00:18:44.133 "dhgroup": "ffdhe4096" 00:18:44.133 } 00:18:44.133 } 00:18:44.133 ]' 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:44.133 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:44.392 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.392 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.392 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.392 20:32:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.330 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.330 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:45.590 00:18:45.591 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:45.591 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:45.591 20:32:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:45.850 { 00:18:45.850 "cntlid": 33, 00:18:45.850 "qid": 0, 00:18:45.850 "state": "enabled", 00:18:45.850 "thread": "nvmf_tgt_poll_group_000", 00:18:45.850 "listen_address": { 00:18:45.850 "trtype": "TCP", 00:18:45.850 "adrfam": "IPv4", 00:18:45.850 "traddr": "10.0.0.2", 00:18:45.850 "trsvcid": "4420" 00:18:45.850 }, 00:18:45.850 "peer_address": { 00:18:45.850 "trtype": "TCP", 00:18:45.850 "adrfam": "IPv4", 00:18:45.850 "traddr": "10.0.0.1", 00:18:45.850 "trsvcid": "47760" 00:18:45.850 }, 00:18:45.850 "auth": { 00:18:45.850 "state": "completed", 00:18:45.850 "digest": "sha256", 00:18:45.850 "dhgroup": "ffdhe6144" 00:18:45.850 } 00:18:45.850 } 00:18:45.850 ]' 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:45.850 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:46.109 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:46.109 20:32:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:46.109 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.109 20:32:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.084 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:47.345 00:18:47.345 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:47.345 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:47.345 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:47.606 { 00:18:47.606 "cntlid": 35, 00:18:47.606 "qid": 0, 00:18:47.606 "state": "enabled", 00:18:47.606 "thread": "nvmf_tgt_poll_group_000", 00:18:47.606 "listen_address": { 00:18:47.606 "trtype": "TCP", 00:18:47.606 "adrfam": "IPv4", 00:18:47.606 "traddr": "10.0.0.2", 00:18:47.606 "trsvcid": "4420" 00:18:47.606 }, 00:18:47.606 "peer_address": { 00:18:47.606 "trtype": "TCP", 00:18:47.606 "adrfam": "IPv4", 00:18:47.606 "traddr": "10.0.0.1", 00:18:47.606 "trsvcid": "47770" 00:18:47.606 }, 00:18:47.606 "auth": { 00:18:47.606 "state": "completed", 00:18:47.606 "digest": "sha256", 00:18:47.606 "dhgroup": "ffdhe6144" 00:18:47.606 } 00:18:47.606 } 00:18:47.606 ]' 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:47.606 20:32:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.866 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:48.433 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.433 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:48.433 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.433 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:48.691 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.692 20:32:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:48.950 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:49.211 { 00:18:49.211 "cntlid": 37, 00:18:49.211 "qid": 0, 00:18:49.211 "state": "enabled", 00:18:49.211 "thread": "nvmf_tgt_poll_group_000", 00:18:49.211 "listen_address": { 00:18:49.211 "trtype": "TCP", 00:18:49.211 "adrfam": "IPv4", 00:18:49.211 "traddr": "10.0.0.2", 00:18:49.211 "trsvcid": "4420" 00:18:49.211 }, 00:18:49.211 "peer_address": { 00:18:49.211 "trtype": "TCP", 00:18:49.211 "adrfam": "IPv4", 00:18:49.211 "traddr": "10.0.0.1", 00:18:49.211 "trsvcid": "47792" 00:18:49.211 }, 00:18:49.211 "auth": { 00:18:49.211 "state": "completed", 00:18:49.211 "digest": "sha256", 00:18:49.211 "dhgroup": "ffdhe6144" 00:18:49.211 } 00:18:49.211 } 00:18:49.211 ]' 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:49.211 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:49.472 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.472 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.472 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.472 20:32:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:50.413 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.413 20:32:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:50.673 00:18:50.673 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:50.673 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:50.673 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:50.932 { 00:18:50.932 "cntlid": 39, 00:18:50.932 "qid": 0, 00:18:50.932 "state": "enabled", 00:18:50.932 "thread": "nvmf_tgt_poll_group_000", 00:18:50.932 "listen_address": { 00:18:50.932 "trtype": "TCP", 00:18:50.932 "adrfam": "IPv4", 00:18:50.932 "traddr": "10.0.0.2", 00:18:50.932 "trsvcid": "4420" 00:18:50.932 }, 00:18:50.932 "peer_address": { 00:18:50.932 "trtype": "TCP", 00:18:50.932 "adrfam": "IPv4", 00:18:50.932 "traddr": "10.0.0.1", 00:18:50.932 "trsvcid": "47832" 00:18:50.932 }, 00:18:50.932 "auth": { 00:18:50.932 "state": "completed", 00:18:50.932 "digest": "sha256", 00:18:50.932 "dhgroup": "ffdhe6144" 00:18:50.932 } 00:18:50.932 } 00:18:50.932 ]' 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:50.932 20:32:43 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.932 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.192 20:32:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.134 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.134 20:32:44 
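# "hostrpc" throughout this log is simply rpc.py aimed at the host-side SPDK
# instance's socket, separate from the target's; a sketch of the wrapper,
# matching the expansion shown at target/auth.sh@31 and the attach call that
# follows:
hostrpc() {
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/host.sock "$@"
}
hostrpc bdev_nvme_get_controllers | jq -r '.[].name'   # expect nvme0 while attached
hostrpc bdev_nvme_detach_controller nvme0              # drop it before the next pass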
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.134 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:52.706 00:18:52.706 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:52.706 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:52.706 20:32:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:52.706 { 00:18:52.706 "cntlid": 41, 00:18:52.706 "qid": 0, 00:18:52.706 "state": "enabled", 00:18:52.706 "thread": "nvmf_tgt_poll_group_000", 00:18:52.706 "listen_address": { 00:18:52.706 "trtype": "TCP", 00:18:52.706 "adrfam": "IPv4", 00:18:52.706 "traddr": "10.0.0.2", 00:18:52.706 "trsvcid": "4420" 00:18:52.706 }, 00:18:52.706 "peer_address": { 00:18:52.706 "trtype": "TCP", 00:18:52.706 "adrfam": "IPv4", 00:18:52.706 "traddr": "10.0.0.1", 00:18:52.706 "trsvcid": "44332" 00:18:52.706 }, 00:18:52.706 "auth": { 00:18:52.706 "state": "completed", 00:18:52.706 "digest": "sha256", 00:18:52.706 "dhgroup": "ffdhe8192" 00:18:52.706 } 00:18:52.706 } 00:18:52.706 ]' 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:52.706 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.967 20:32:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret 
DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:53.909 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:53.909 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.480 00:18:54.480 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:54.480 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:54.480 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:54.741 { 00:18:54.741 "cntlid": 43, 00:18:54.741 "qid": 0, 00:18:54.741 "state": "enabled", 00:18:54.741 "thread": "nvmf_tgt_poll_group_000", 00:18:54.741 "listen_address": { 00:18:54.741 "trtype": "TCP", 00:18:54.741 "adrfam": "IPv4", 00:18:54.741 "traddr": "10.0.0.2", 00:18:54.741 "trsvcid": "4420" 00:18:54.741 }, 00:18:54.741 "peer_address": { 00:18:54.741 "trtype": "TCP", 00:18:54.741 "adrfam": "IPv4", 00:18:54.741 "traddr": "10.0.0.1", 00:18:54.741 "trsvcid": "44354" 00:18:54.741 }, 00:18:54.741 "auth": { 00:18:54.741 "state": "completed", 00:18:54.741 "digest": "sha256", 00:18:54.741 "dhgroup": "ffdhe8192" 00:18:54.741 } 00:18:54.741 } 00:18:54.741 ]' 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:54.741 20:32:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:54.741 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:54.741 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:54.741 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:54.741 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:54.741 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.002 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:55.573 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.573 20:32:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:55.834 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.405 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:56.405 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.665 20:32:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:56.665 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:56.665 { 00:18:56.665 "cntlid": 45, 00:18:56.665 "qid": 0, 00:18:56.665 "state": "enabled", 00:18:56.665 "thread": "nvmf_tgt_poll_group_000", 00:18:56.666 "listen_address": { 00:18:56.666 "trtype": "TCP", 00:18:56.666 "adrfam": "IPv4", 00:18:56.666 "traddr": "10.0.0.2", 00:18:56.666 "trsvcid": "4420" 
00:18:56.666 }, 00:18:56.666 "peer_address": { 00:18:56.666 "trtype": "TCP", 00:18:56.666 "adrfam": "IPv4", 00:18:56.666 "traddr": "10.0.0.1", 00:18:56.666 "trsvcid": "44370" 00:18:56.666 }, 00:18:56.666 "auth": { 00:18:56.666 "state": "completed", 00:18:56.666 "digest": "sha256", 00:18:56.666 "dhgroup": "ffdhe8192" 00:18:56.666 } 00:18:56.666 } 00:18:56.666 ]' 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:56.666 20:32:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:56.926 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.496 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.496 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.758 20:32:49 
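# For keyid 3 the matching ckeys entry is empty, so the ${ckeys[$3]:+...}
# expansion above yields nothing and no --dhchap-ctrlr-key is passed: the
# target still authenticates the host, but the host does not authenticate the
# target (one-way auth). An illustration with a placeholder array; the real
# values are the DHHC-1 secrets generated earlier in the script:
ckeys=('<ckey0>' '<ckey1>' '<ckey2>' '')   # last entry deliberately empty
keyid=3
ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
echo "${#ckey[@]}"   # prints 0: the argument pair is dropped entirely for key3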
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:57.758 20:32:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:58.331 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:58.331 { 00:18:58.331 "cntlid": 47, 00:18:58.331 "qid": 0, 00:18:58.331 "state": "enabled", 00:18:58.331 "thread": "nvmf_tgt_poll_group_000", 00:18:58.331 "listen_address": { 00:18:58.331 "trtype": "TCP", 00:18:58.331 "adrfam": "IPv4", 00:18:58.331 "traddr": "10.0.0.2", 00:18:58.331 "trsvcid": "4420" 00:18:58.331 }, 00:18:58.331 "peer_address": { 00:18:58.331 "trtype": "TCP", 00:18:58.331 "adrfam": "IPv4", 00:18:58.331 "traddr": "10.0.0.1", 00:18:58.331 "trsvcid": "44390" 00:18:58.331 }, 00:18:58.331 "auth": { 00:18:58.331 "state": "completed", 00:18:58.331 "digest": "sha256", 00:18:58.331 "dhgroup": "ffdhe8192" 00:18:58.331 } 00:18:58.331 } 00:18:58.331 ]' 00:18:58.331 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.592 
00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:18:58.592 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:18:58.853 20:32:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=:
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:18:59.425 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}"
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:59.425 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:18:59.686 20:32:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.686 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:59.947 { 00:18:59.947 "cntlid": 49, 00:18:59.947 "qid": 0, 00:18:59.947 "state": "enabled", 00:18:59.947 "thread": "nvmf_tgt_poll_group_000", 00:18:59.947 "listen_address": { 00:18:59.947 "trtype": "TCP", 00:18:59.947 "adrfam": "IPv4", 00:18:59.947 "traddr": "10.0.0.2", 00:18:59.947 "trsvcid": "4420" 00:18:59.947 }, 00:18:59.947 "peer_address": { 00:18:59.947 "trtype": "TCP", 00:18:59.947 "adrfam": "IPv4", 00:18:59.947 "traddr": "10.0.0.1", 00:18:59.947 "trsvcid": "44412" 00:18:59.947 }, 00:18:59.947 "auth": { 00:18:59.947 "state": "completed", 00:18:59.947 "digest": "sha384", 00:18:59.947 "dhgroup": "null" 00:18:59.947 } 00:18:59.947 } 00:18:59.947 ]' 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.947 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.208 20:32:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.152 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- 
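The for-loop trace lines at target/auth.sh@91-@93 a few entries back give away the overall shape of this section: a sweep over every digest, DH group and key index, with the host's allowed algorithms pinned before each authentication attempt. A sketch of that driver loop (the array contents are an assumption; this chunk exercises sha256 and sha384 against ffdhe8192, null, ffdhe2048 and ffdhe3072 with keys key0-key3):

  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              # restrict the host to exactly one digest/dhgroup pair ...
              hostrpc bdev_nvme_set_options \
                  --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
              # ... then authenticate with each key in turn
              connect_authenticate "$digest" "$dhgroup" "$keyid"
          done
      done
  done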
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.152 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.413 00:19:01.413 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:01.413 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:01.413 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:01.675 { 00:19:01.675 "cntlid": 51, 00:19:01.675 "qid": 0, 00:19:01.675 "state": "enabled", 00:19:01.675 "thread": "nvmf_tgt_poll_group_000", 00:19:01.675 "listen_address": { 00:19:01.675 "trtype": "TCP", 00:19:01.675 "adrfam": "IPv4", 00:19:01.675 "traddr": "10.0.0.2", 00:19:01.675 "trsvcid": "4420" 00:19:01.675 }, 00:19:01.675 "peer_address": { 00:19:01.675 "trtype": "TCP", 00:19:01.675 "adrfam": "IPv4", 00:19:01.675 "traddr": "10.0.0.1", 00:19:01.675 "trsvcid": "57368" 00:19:01.675 }, 00:19:01.675 "auth": { 00:19:01.675 "state": "completed", 00:19:01.675 "digest": "sha384", 00:19:01.675 "dhgroup": "null" 00:19:01.675 } 00:19:01.675 } 00:19:01.675 ]' 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:01.675 20:32:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:01.675 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.675 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.675 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.936 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:02.507 20:32:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:19:02.767 20:32:55 
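The nvme connect invocations in this trace carry the DH-HMAC-CHAP secrets inline, in the "DHHC-1:NN:<base64>:" key format. Per the NVMe 2.0 authentication spec (stated here from the spec, not from this log), NN encodes the hash used to transform the secret (00 = unhashed, 01/02/03 = SHA-256/384/512) and the base64 payload carries the key plus a CRC-32. A sketch of the shape of such a call, with the secrets elided:

  # supplying both flags requests bidirectional authentication: the host
  # proves itself with --dhchap-secret and verifies the controller's reply
  # against --dhchap-ctrl-secret
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret "DHHC-1:01:...:" \
      --dhchap-ctrl-secret "DHHC-1:02:...:"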
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.767 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.028 00:19:03.028 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:03.028 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:03.028 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:03.289 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:03.289 { 00:19:03.289 "cntlid": 53, 00:19:03.289 "qid": 0, 00:19:03.289 "state": "enabled", 00:19:03.289 "thread": "nvmf_tgt_poll_group_000", 00:19:03.289 "listen_address": { 00:19:03.289 "trtype": "TCP", 00:19:03.290 "adrfam": "IPv4", 00:19:03.290 "traddr": "10.0.0.2", 00:19:03.290 "trsvcid": "4420" 00:19:03.290 }, 00:19:03.290 "peer_address": { 00:19:03.290 "trtype": "TCP", 00:19:03.290 "adrfam": "IPv4", 00:19:03.290 "traddr": "10.0.0.1", 00:19:03.290 "trsvcid": "57396" 00:19:03.290 }, 00:19:03.290 "auth": { 00:19:03.290 "state": "completed", 00:19:03.290 "digest": "sha384", 00:19:03.290 "dhgroup": "null" 00:19:03.290 } 00:19:03.290 } 00:19:03.290 ]' 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.290 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.550 20:32:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:04.119 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.119 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.379 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:04.639 00:19:04.639 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:04.639 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:04.639 20:32:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.639 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.898 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.898 20:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:04.898 20:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:04.899 { 00:19:04.899 "cntlid": 55, 00:19:04.899 "qid": 0, 00:19:04.899 "state": "enabled", 00:19:04.899 "thread": "nvmf_tgt_poll_group_000", 00:19:04.899 "listen_address": { 00:19:04.899 "trtype": "TCP", 00:19:04.899 "adrfam": "IPv4", 00:19:04.899 "traddr": "10.0.0.2", 00:19:04.899 "trsvcid": "4420" 00:19:04.899 }, 00:19:04.899 "peer_address": { 00:19:04.899 "trtype": "TCP", 00:19:04.899 "adrfam": "IPv4", 00:19:04.899 "traddr": "10.0.0.1", 00:19:04.899 "trsvcid": "57416" 00:19:04.899 }, 00:19:04.899 "auth": { 00:19:04.899 "state": "completed", 00:19:04.899 "digest": "sha384", 00:19:04.899 "dhgroup": "null" 00:19:04.899 } 00:19:04.899 } 00:19:04.899 ]' 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.899 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.159 20:32:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:05.730 20:32:58 
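Note that the key3 rounds, like the one just completed, authenticate one-way only: the ckey expansion at target/auth.sh@37 adds --dhchap-ctrlr-key only when a controller key exists for that index, ckeys[3] is empty in this run, and accordingly both nvmf_subsystem_add_host and the nvme connect above were issued without a controller secret. A sketch of that target-side step (keyid stands in for the positional $3 used by connect_authenticate):

  # expands to nothing when no controller key is defined for this index
  ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})

  # authorize the host NQN on the subsystem with this round's key pair
  rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
      --dhchap-key "key$keyid" "${ckey[@]}"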
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.730 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.730 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.991 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.291 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:06.291 { 00:19:06.291 "cntlid": 57, 00:19:06.291 "qid": 0, 00:19:06.291 "state": "enabled", 00:19:06.291 "thread": "nvmf_tgt_poll_group_000", 00:19:06.291 "listen_address": { 00:19:06.291 "trtype": "TCP", 00:19:06.291 "adrfam": "IPv4", 00:19:06.291 "traddr": "10.0.0.2", 00:19:06.291 "trsvcid": "4420" 00:19:06.291 }, 00:19:06.291 "peer_address": { 00:19:06.291 "trtype": "TCP", 00:19:06.291 "adrfam": "IPv4", 00:19:06.291 "traddr": "10.0.0.1", 00:19:06.291 "trsvcid": "57436" 00:19:06.291 }, 00:19:06.291 "auth": { 00:19:06.291 "state": "completed", 00:19:06.291 "digest": "sha384", 00:19:06.291 "dhgroup": "ffdhe2048" 00:19:06.291 } 00:19:06.291 } 00:19:06.291 ]' 00:19:06.291 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.608 20:32:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.566 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.566 20:32:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.828 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:07.828 20:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:08.089 { 00:19:08.089 "cntlid": 59, 00:19:08.089 "qid": 0, 00:19:08.089 "state": "enabled", 00:19:08.089 "thread": "nvmf_tgt_poll_group_000", 00:19:08.089 "listen_address": { 00:19:08.089 "trtype": "TCP", 00:19:08.089 "adrfam": "IPv4", 00:19:08.089 "traddr": "10.0.0.2", 00:19:08.089 "trsvcid": "4420" 00:19:08.089 }, 00:19:08.089 "peer_address": { 00:19:08.089 "trtype": "TCP", 00:19:08.089 "adrfam": "IPv4", 00:19:08.089 
"traddr": "10.0.0.1", 00:19:08.089 "trsvcid": "57454" 00:19:08.089 }, 00:19:08.089 "auth": { 00:19:08.089 "state": "completed", 00:19:08.089 "digest": "sha384", 00:19:08.089 "dhgroup": "ffdhe2048" 00:19:08.089 } 00:19:08.089 } 00:19:08.089 ]' 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.089 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.351 20:33:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.924 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:08.924 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.186 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.447 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:09.447 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:09.447 { 00:19:09.447 "cntlid": 61, 00:19:09.447 "qid": 0, 00:19:09.447 "state": "enabled", 00:19:09.447 "thread": "nvmf_tgt_poll_group_000", 00:19:09.447 "listen_address": { 00:19:09.447 "trtype": "TCP", 00:19:09.447 "adrfam": "IPv4", 00:19:09.447 "traddr": "10.0.0.2", 00:19:09.448 "trsvcid": "4420" 00:19:09.448 }, 00:19:09.448 "peer_address": { 00:19:09.448 "trtype": "TCP", 00:19:09.448 "adrfam": "IPv4", 00:19:09.448 "traddr": "10.0.0.1", 00:19:09.448 "trsvcid": "57494" 00:19:09.448 }, 00:19:09.448 "auth": { 00:19:09.448 "state": "completed", 00:19:09.448 "digest": "sha384", 00:19:09.448 "dhgroup": "ffdhe2048" 00:19:09.448 } 00:19:09.448 } 00:19:09.448 ]' 00:19:09.448 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.709 20:33:01 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.974 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.544 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:10.805 20:33:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:11.066 00:19:11.066 20:33:03 
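The SPDK-side attach mirrors the nvme-cli connect but references keys by name rather than by inline secret: key0-key3 and ckey0-ckey3 are key objects the host app can resolve, presumably registered with it earlier in the test (that setup is not visible in this chunk). A sketch of the call as it appears at target/auth.sh@40:

  hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
      --dhchap-key "key$keyid" "${ckey[@]}"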
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:11.066 { 00:19:11.066 "cntlid": 63, 00:19:11.066 "qid": 0, 00:19:11.066 "state": "enabled", 00:19:11.066 "thread": "nvmf_tgt_poll_group_000", 00:19:11.066 "listen_address": { 00:19:11.066 "trtype": "TCP", 00:19:11.066 "adrfam": "IPv4", 00:19:11.066 "traddr": "10.0.0.2", 00:19:11.066 "trsvcid": "4420" 00:19:11.066 }, 00:19:11.066 "peer_address": { 00:19:11.066 "trtype": "TCP", 00:19:11.066 "adrfam": "IPv4", 00:19:11.066 "traddr": "10.0.0.1", 00:19:11.066 "trsvcid": "52230" 00:19:11.066 }, 00:19:11.066 "auth": { 00:19:11.066 "state": "completed", 00:19:11.066 "digest": "sha384", 00:19:11.066 "dhgroup": "ffdhe2048" 00:19:11.066 } 00:19:11.066 } 00:19:11.066 ]' 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.066 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:11.326 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:11.326 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:11.327 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.327 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.327 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.327 20:33:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:12.270 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.271 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
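Each iteration tears down in the order seen at target/auth.sh@49-@56, so that the next digest/dhgroup/key combination starts from a clean slate:

  hostrpc bdev_nvme_detach_controller nvme0        # drop the SPDK initiator's controller
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0    # end the kernel initiator's session
  rpc_cmd nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 "$hostnqn"        # de-authorize the host on the target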
00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.271 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.531 00:19:12.531 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:12.531 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:12.531 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:12.792 { 
00:19:12.792 "cntlid": 65, 00:19:12.792 "qid": 0, 00:19:12.792 "state": "enabled", 00:19:12.792 "thread": "nvmf_tgt_poll_group_000", 00:19:12.792 "listen_address": { 00:19:12.792 "trtype": "TCP", 00:19:12.792 "adrfam": "IPv4", 00:19:12.792 "traddr": "10.0.0.2", 00:19:12.792 "trsvcid": "4420" 00:19:12.792 }, 00:19:12.792 "peer_address": { 00:19:12.792 "trtype": "TCP", 00:19:12.792 "adrfam": "IPv4", 00:19:12.792 "traddr": "10.0.0.1", 00:19:12.792 "trsvcid": "52252" 00:19:12.792 }, 00:19:12.792 "auth": { 00:19:12.792 "state": "completed", 00:19:12.792 "digest": "sha384", 00:19:12.792 "dhgroup": "ffdhe3072" 00:19:12.792 } 00:19:12.792 } 00:19:12.792 ]' 00:19:12.792 20:33:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.792 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.054 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.628 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.628 20:33:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.889 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.150 00:19:14.150 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:14.150 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:14.150 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:14.411 { 00:19:14.411 "cntlid": 67, 00:19:14.411 "qid": 0, 00:19:14.411 "state": "enabled", 00:19:14.411 "thread": "nvmf_tgt_poll_group_000", 00:19:14.411 "listen_address": { 00:19:14.411 "trtype": "TCP", 00:19:14.411 "adrfam": "IPv4", 00:19:14.411 "traddr": "10.0.0.2", 00:19:14.411 "trsvcid": "4420" 00:19:14.411 }, 00:19:14.411 "peer_address": { 00:19:14.411 "trtype": "TCP", 00:19:14.411 "adrfam": "IPv4", 00:19:14.411 "traddr": "10.0.0.1", 00:19:14.411 "trsvcid": "52286" 00:19:14.411 }, 00:19:14.411 "auth": { 00:19:14.411 "state": "completed", 00:19:14.411 "digest": "sha384", 00:19:14.411 "dhgroup": "ffdhe3072" 00:19:14.411 } 00:19:14.411 } 00:19:14.411 ]' 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:14.411 20:33:06 
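A readability note: comparisons such as the one just below print as [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] because bash's xtrace backslash-escapes the right-hand side of == inside [[ ]] when it was quoted in the script, marking a literal string match rather than a glob pattern. These lines are normal trace output, not corruption; the underlying check is simply:

  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]   # quoted RHS: literal match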
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.411 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.672 20:33:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:15.245 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.506 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.768 00:19:15.768 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:15.768 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.768 20:33:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:15.768 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.768 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.768 20:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:15.768 20:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:16.029 { 00:19:16.029 "cntlid": 69, 00:19:16.029 "qid": 0, 00:19:16.029 "state": "enabled", 00:19:16.029 "thread": "nvmf_tgt_poll_group_000", 00:19:16.029 "listen_address": { 00:19:16.029 "trtype": "TCP", 00:19:16.029 "adrfam": "IPv4", 00:19:16.029 "traddr": "10.0.0.2", 00:19:16.029 "trsvcid": "4420" 00:19:16.029 }, 00:19:16.029 "peer_address": { 00:19:16.029 "trtype": "TCP", 00:19:16.029 "adrfam": "IPv4", 00:19:16.029 "traddr": "10.0.0.1", 00:19:16.029 "trsvcid": "52318" 00:19:16.029 }, 00:19:16.029 "auth": { 00:19:16.029 "state": "completed", 00:19:16.029 "digest": "sha384", 00:19:16.029 "dhgroup": "ffdhe3072" 00:19:16.029 } 00:19:16.029 } 00:19:16.029 ]' 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.029 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.291 20:33:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret 
DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:16.864 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.125 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:17.387 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:17.387 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:17.647 { 00:19:17.647 "cntlid": 71, 00:19:17.647 "qid": 0, 00:19:17.647 "state": "enabled", 00:19:17.647 "thread": "nvmf_tgt_poll_group_000", 00:19:17.647 "listen_address": { 00:19:17.647 "trtype": "TCP", 00:19:17.647 "adrfam": "IPv4", 00:19:17.647 "traddr": "10.0.0.2", 00:19:17.647 "trsvcid": "4420" 00:19:17.647 }, 00:19:17.647 "peer_address": { 00:19:17.647 "trtype": "TCP", 00:19:17.647 "adrfam": "IPv4", 00:19:17.647 "traddr": "10.0.0.1", 00:19:17.647 "trsvcid": "52334" 00:19:17.647 }, 00:19:17.647 "auth": { 00:19:17.647 "state": "completed", 00:19:17.647 "digest": "sha384", 00:19:17.647 "dhgroup": "ffdhe3072" 00:19:17.647 } 00:19:17.647 } 00:19:17.647 ]' 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.647 20:33:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:17.908 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.480 20:33:10 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.741 20:33:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.002 00:19:19.002 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:19.002 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.002 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:19.262 { 00:19:19.262 "cntlid": 73, 00:19:19.262 "qid": 0, 00:19:19.262 "state": "enabled", 00:19:19.262 "thread": "nvmf_tgt_poll_group_000", 00:19:19.262 "listen_address": { 00:19:19.262 "trtype": "TCP", 00:19:19.262 "adrfam": "IPv4", 00:19:19.262 "traddr": "10.0.0.2", 00:19:19.262 "trsvcid": "4420" 00:19:19.262 }, 00:19:19.262 "peer_address": { 00:19:19.262 "trtype": "TCP", 00:19:19.262 "adrfam": "IPv4", 00:19:19.262 "traddr": "10.0.0.1", 00:19:19.262 "trsvcid": "52362" 00:19:19.262 }, 00:19:19.262 "auth": { 00:19:19.262 
"state": "completed", 00:19:19.262 "digest": "sha384", 00:19:19.262 "dhgroup": "ffdhe4096" 00:19:19.262 } 00:19:19.262 } 00:19:19.262 ]' 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.262 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.522 20:33:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:20.092 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.353 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.613 00:19:20.613 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:20.613 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:20.613 20:33:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:20.874 { 00:19:20.874 "cntlid": 75, 00:19:20.874 "qid": 0, 00:19:20.874 "state": "enabled", 00:19:20.874 "thread": "nvmf_tgt_poll_group_000", 00:19:20.874 "listen_address": { 00:19:20.874 "trtype": "TCP", 00:19:20.874 "adrfam": "IPv4", 00:19:20.874 "traddr": "10.0.0.2", 00:19:20.874 "trsvcid": "4420" 00:19:20.874 }, 00:19:20.874 "peer_address": { 00:19:20.874 "trtype": "TCP", 00:19:20.874 "adrfam": "IPv4", 00:19:20.874 "traddr": "10.0.0.1", 00:19:20.874 "trsvcid": "52382" 00:19:20.874 }, 00:19:20.874 "auth": { 00:19:20.874 "state": "completed", 00:19:20.874 "digest": "sha384", 00:19:20.874 "dhgroup": "ffdhe4096" 00:19:20.874 } 00:19:20.874 } 00:19:20.874 ]' 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.874 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.133 20:33:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.702 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.962 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:19:22.222 00:19:22.222 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.222 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.222 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.483 { 00:19:22.483 "cntlid": 77, 00:19:22.483 "qid": 0, 00:19:22.483 "state": "enabled", 00:19:22.483 "thread": "nvmf_tgt_poll_group_000", 00:19:22.483 "listen_address": { 00:19:22.483 "trtype": "TCP", 00:19:22.483 "adrfam": "IPv4", 00:19:22.483 "traddr": "10.0.0.2", 00:19:22.483 "trsvcid": "4420" 00:19:22.483 }, 00:19:22.483 "peer_address": { 00:19:22.483 "trtype": "TCP", 00:19:22.483 "adrfam": "IPv4", 00:19:22.483 "traddr": "10.0.0.1", 00:19:22.483 "trsvcid": "47114" 00:19:22.483 }, 00:19:22.483 "auth": { 00:19:22.483 "state": "completed", 00:19:22.483 "digest": "sha384", 00:19:22.483 "dhgroup": "ffdhe4096" 00:19:22.483 } 00:19:22.483 } 00:19:22.483 ]' 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.483 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.743 20:33:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.682 20:33:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:23.942 00:19:23.942 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:23.942 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:23.942 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.201 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.201 { 00:19:24.201 "cntlid": 79, 00:19:24.201 "qid": 
0, 00:19:24.201 "state": "enabled", 00:19:24.202 "thread": "nvmf_tgt_poll_group_000", 00:19:24.202 "listen_address": { 00:19:24.202 "trtype": "TCP", 00:19:24.202 "adrfam": "IPv4", 00:19:24.202 "traddr": "10.0.0.2", 00:19:24.202 "trsvcid": "4420" 00:19:24.202 }, 00:19:24.202 "peer_address": { 00:19:24.202 "trtype": "TCP", 00:19:24.202 "adrfam": "IPv4", 00:19:24.202 "traddr": "10.0.0.1", 00:19:24.202 "trsvcid": "47134" 00:19:24.202 }, 00:19:24.202 "auth": { 00:19:24.202 "state": "completed", 00:19:24.202 "digest": "sha384", 00:19:24.202 "dhgroup": "ffdhe4096" 00:19:24.202 } 00:19:24.202 } 00:19:24.202 ]' 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.202 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:24.461 20:33:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.030 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:25.030 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:25.290 20:33:17 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.290 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.550 00:19:25.550 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:25.550 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:25.550 20:33:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:25.812 { 00:19:25.812 "cntlid": 81, 00:19:25.812 "qid": 0, 00:19:25.812 "state": "enabled", 00:19:25.812 "thread": "nvmf_tgt_poll_group_000", 00:19:25.812 "listen_address": { 00:19:25.812 "trtype": "TCP", 00:19:25.812 "adrfam": "IPv4", 00:19:25.812 "traddr": "10.0.0.2", 00:19:25.812 "trsvcid": "4420" 00:19:25.812 }, 00:19:25.812 "peer_address": { 00:19:25.812 "trtype": "TCP", 00:19:25.812 "adrfam": "IPv4", 00:19:25.812 "traddr": "10.0.0.1", 00:19:25.812 "trsvcid": "47160" 00:19:25.812 }, 00:19:25.812 "auth": { 00:19:25.812 "state": "completed", 00:19:25.812 "digest": "sha384", 00:19:25.812 "dhgroup": "ffdhe6144" 00:19:25.812 } 00:19:25.812 } 00:19:25.812 ]' 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:25.812 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:26.073 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.073 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.073 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.073 20:33:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.015 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.276 00:19:27.276 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:27.276 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.276 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.537 { 00:19:27.537 "cntlid": 83, 00:19:27.537 "qid": 0, 00:19:27.537 "state": "enabled", 00:19:27.537 "thread": "nvmf_tgt_poll_group_000", 00:19:27.537 "listen_address": { 00:19:27.537 "trtype": "TCP", 00:19:27.537 "adrfam": "IPv4", 00:19:27.537 "traddr": "10.0.0.2", 00:19:27.537 "trsvcid": "4420" 00:19:27.537 }, 00:19:27.537 "peer_address": { 00:19:27.537 "trtype": "TCP", 00:19:27.537 "adrfam": "IPv4", 00:19:27.537 "traddr": "10.0.0.1", 00:19:27.537 "trsvcid": "47186" 00:19:27.537 }, 00:19:27.537 "auth": { 00:19:27.537 "state": "completed", 00:19:27.537 "digest": "sha384", 00:19:27.537 "dhgroup": "ffdhe6144" 00:19:27.537 } 00:19:27.537 } 00:19:27.537 ]' 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.537 20:33:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.798 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret 
DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.740 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:28.740 20:33:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.001 00:19:29.001 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.001 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.001 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.262 { 00:19:29.262 "cntlid": 85, 00:19:29.262 "qid": 0, 00:19:29.262 "state": "enabled", 00:19:29.262 "thread": "nvmf_tgt_poll_group_000", 00:19:29.262 "listen_address": { 00:19:29.262 "trtype": "TCP", 00:19:29.262 "adrfam": "IPv4", 00:19:29.262 "traddr": "10.0.0.2", 00:19:29.262 "trsvcid": "4420" 00:19:29.262 }, 00:19:29.262 "peer_address": { 00:19:29.262 "trtype": "TCP", 00:19:29.262 "adrfam": "IPv4", 00:19:29.262 "traddr": "10.0.0.1", 00:19:29.262 "trsvcid": "47210" 00:19:29.262 }, 00:19:29.262 "auth": { 00:19:29.262 "state": "completed", 00:19:29.262 "digest": "sha384", 00:19:29.262 "dhgroup": "ffdhe6144" 00:19:29.262 } 00:19:29.262 } 00:19:29.262 ]' 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.262 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.524 20:33:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:30.470 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
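The xtrace output above repeats one fixed cycle per digest/dhgroup/key combination (target/auth.sh@92 and @93 are the dhgroup and keyid loops; this stretch covers sha384 with ffdhe3072 through ffdhe8192 and keys 0-3). Condensed from the traced script lines, the cycle looks roughly like the sketch below — a reconstruction, not a verbatim excerpt. "hostrpc" is the wrapper visible in the trace (rpc.py against /var/tmp/host.sock); "rpc_cmd" talks to the target, whose RPC socket path is not shown in this part of the log (SPDK's default is /var/tmp/spdk.sock, assumed here). The ckeys table is likewise a stand-in for the script's real key arrays; everything else is taken from the commands traced above.

#!/usr/bin/env bash
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
ckeys=(ckey0 ckey1 ckey2 "")   # empty ckey3 -> key3 runs without a controller key

hostrpc() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"; }
rpc_cmd() { /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }  # target socket path assumed

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3 qpairs

    # Host side: restrict negotiation to the combination under test (auth.sh@94).
    hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

    # Target side: authorize the host NQN with this key; when ckeyN is unset the
    # --dhchap-ctrlr-key argument is dropped, i.e. one-way authentication (auth.sh@37/@39).
    rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Host side: connect and authenticate (auth.sh@40).
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" --dhchap-key "key$keyid" \
        ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

    # Assert the negotiated parameters on the live qpair (auth.sh@44-@48).
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs "$subnqn")
    [[ $(jq -r '.[0].auth.digest' <<< "$qpairs") == "$digest" ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "$dhgroup" ]]
    [[ $(jq -r '.[0].auth.state' <<< "$qpairs") == completed ]]

    hostrpc bdev_nvme_detach_controller nvme0
}

connect_authenticate sha384 ffdhe6144 3   # the combination this part of the trace is entering

Each pass additionally repeats the handshake with the kernel initiator (nvme connect ... --dhchap-secret/--dhchap-ctrl-secret at auth.sh@52, then nvme disconnect and nvmf_subsystem_remove_host at @55/@56), which is consistent with the cntlid in the qpair dumps advancing by two per cycle (65, 67, ..., 85 so far): every cycle establishes two controllers, one via SPDK's host stack and one via the kernel.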
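The secrets handed to nvme connect throughout this run use the NVMe-oF DH-HMAC-CHAP representation DHHC-1:<t>:<base64>:, where the two-digit <t> field names the hash used to transform the secret before the handshake (00 = no transform, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check value. So the DHHC-1:03: controller secrets above are SHA-512-transformed while the DHHC-1:00: host secrets are used as-is; recent nvme-cli releases can mint such strings with nvme gen-dhchap-key. A tiny sketch for reading the field back (the helper name is ours, not part of the test suite):

# Hypothetical helper: report the transform hash encoded in a DH-HMAC-CHAP
# secret of the form DHHC-1:<t>:<base64>: as passed to --dhchap-secret above.
dhchap_transform() {
    case $(cut -d: -f2 <<< "$1") in
        00) echo "none (secret used as-is)" ;;
        01) echo "SHA-256" ;;
        02) echo "SHA-384" ;;
        03) echo "SHA-512" ;;
        *)  echo "unknown transform" >&2; return 1 ;;
    esac
}

dhchap_transform "DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=:"   # -> SHA-512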
00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.471 20:33:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:30.731 00:19:30.731 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:30.731 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:30.732 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:30.997 { 00:19:30.997 "cntlid": 87, 00:19:30.997 "qid": 0, 00:19:30.997 "state": "enabled", 00:19:30.997 "thread": "nvmf_tgt_poll_group_000", 00:19:30.997 "listen_address": { 00:19:30.997 "trtype": "TCP", 00:19:30.997 "adrfam": "IPv4", 00:19:30.997 "traddr": "10.0.0.2", 00:19:30.997 "trsvcid": "4420" 00:19:30.997 }, 00:19:30.997 "peer_address": { 00:19:30.997 "trtype": "TCP", 00:19:30.997 "adrfam": "IPv4", 00:19:30.997 "traddr": "10.0.0.1", 00:19:30.997 "trsvcid": "47240" 00:19:30.997 }, 00:19:30.997 "auth": { 00:19:30.997 "state": "completed", 
00:19:30.997 "digest": "sha384", 00:19:30.997 "dhgroup": "ffdhe6144" 00:19:30.997 } 00:19:30.997 } 00:19:30.997 ]' 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:30.997 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.265 20:33:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:32.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.209 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.781 00:19:32.781 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:32.781 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.781 20:33:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:32.781 { 00:19:32.781 "cntlid": 89, 00:19:32.781 "qid": 0, 00:19:32.781 "state": "enabled", 00:19:32.781 "thread": "nvmf_tgt_poll_group_000", 00:19:32.781 "listen_address": { 00:19:32.781 "trtype": "TCP", 00:19:32.781 "adrfam": "IPv4", 00:19:32.781 "traddr": "10.0.0.2", 00:19:32.781 "trsvcid": "4420" 00:19:32.781 }, 00:19:32.781 "peer_address": { 00:19:32.781 "trtype": "TCP", 00:19:32.781 "adrfam": "IPv4", 00:19:32.781 "traddr": "10.0.0.1", 00:19:32.781 "trsvcid": "43020" 00:19:32.781 }, 00:19:32.781 "auth": { 00:19:32.781 "state": "completed", 00:19:32.781 "digest": "sha384", 00:19:32.781 "dhgroup": "ffdhe8192" 00:19:32.781 } 00:19:32.781 } 00:19:32.781 ]' 00:19:32.781 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:33.041 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:33.042 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.303 20:33:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.874 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:33.874 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:34.135 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
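Once the controller is up, the test asserts that authentication really completed with the expected parameters rather than silently downgrading, then repeats the exchange through the kernel initiator with the secrets in DHHC-1 wire format. A condensed sketch of those checks using the same jq filters as the trace (rpc_cmd and hostrpc are the suite's wrappers around rpc.py; $key1/$ckey1 stand for the full DHHC-1:01:/DHHC-1:02: strings shown in the log):

    hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

    # Target side: the qpair listing reports the negotiated auth state.
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
    hostrpc bdev_nvme_detach_controller nvme0

    # Kernel initiator: same subsystem, secrets passed to nvme-cli.
    # ${hostnqn##*:} strips the NQN prefix, leaving the bare host UUID.
    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid "${hostnqn##*:}" \
        --dhchap-secret "$key1" --dhchap-ctrl-secret "$ckey1"
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0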
00:19:34.806 00:19:34.806 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:34.806 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:34.806 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.806 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.807 20:33:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.807 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:34.807 20:33:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:34.807 { 00:19:34.807 "cntlid": 91, 00:19:34.807 "qid": 0, 00:19:34.807 "state": "enabled", 00:19:34.807 "thread": "nvmf_tgt_poll_group_000", 00:19:34.807 "listen_address": { 00:19:34.807 "trtype": "TCP", 00:19:34.807 "adrfam": "IPv4", 00:19:34.807 "traddr": "10.0.0.2", 00:19:34.807 "trsvcid": "4420" 00:19:34.807 }, 00:19:34.807 "peer_address": { 00:19:34.807 "trtype": "TCP", 00:19:34.807 "adrfam": "IPv4", 00:19:34.807 "traddr": "10.0.0.1", 00:19:34.807 "trsvcid": "43046" 00:19:34.807 }, 00:19:34.807 "auth": { 00:19:34.807 "state": "completed", 00:19:34.807 "digest": "sha384", 00:19:34.807 "dhgroup": "ffdhe8192" 00:19:34.807 } 00:19:34.807 } 00:19:34.807 ]' 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.807 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:35.078 20:33:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:35.649 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.649 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.649 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:35.649 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:19:35.649 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.909 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:36.479 00:19:36.479 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:36.479 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:36.479 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.739 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.740 { 
00:19:36.740 "cntlid": 93, 00:19:36.740 "qid": 0, 00:19:36.740 "state": "enabled", 00:19:36.740 "thread": "nvmf_tgt_poll_group_000", 00:19:36.740 "listen_address": { 00:19:36.740 "trtype": "TCP", 00:19:36.740 "adrfam": "IPv4", 00:19:36.740 "traddr": "10.0.0.2", 00:19:36.740 "trsvcid": "4420" 00:19:36.740 }, 00:19:36.740 "peer_address": { 00:19:36.740 "trtype": "TCP", 00:19:36.740 "adrfam": "IPv4", 00:19:36.740 "traddr": "10.0.0.1", 00:19:36.740 "trsvcid": "43072" 00:19:36.740 }, 00:19:36.740 "auth": { 00:19:36.740 "state": "completed", 00:19:36.740 "digest": "sha384", 00:19:36.740 "dhgroup": "ffdhe8192" 00:19:36.740 } 00:19:36.740 } 00:19:36.740 ]' 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:36.740 20:33:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.740 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.740 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.740 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.000 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:37.941 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.941 20:33:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:37.941 20:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.941 20:33:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:19:37.941 20:33:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.941 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.512 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.512 { 00:19:38.512 "cntlid": 95, 00:19:38.512 "qid": 0, 00:19:38.512 "state": "enabled", 00:19:38.512 "thread": "nvmf_tgt_poll_group_000", 00:19:38.512 "listen_address": { 00:19:38.512 "trtype": "TCP", 00:19:38.512 "adrfam": "IPv4", 00:19:38.512 "traddr": "10.0.0.2", 00:19:38.512 "trsvcid": "4420" 00:19:38.512 }, 00:19:38.512 "peer_address": { 00:19:38.512 "trtype": "TCP", 00:19:38.512 "adrfam": "IPv4", 00:19:38.512 "traddr": "10.0.0.1", 00:19:38.512 "trsvcid": "43086" 00:19:38.512 }, 00:19:38.512 "auth": { 00:19:38.512 "state": "completed", 00:19:38.512 "digest": "sha384", 00:19:38.512 "dhgroup": "ffdhe8192" 00:19:38.512 } 00:19:38.512 } 00:19:38.512 ]' 00:19:38.512 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.774 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:38.774 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.774 20:33:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:38.774 20:33:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.774 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.774 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.774 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.035 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.608 20:33:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.870 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.132 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.132 { 00:19:40.132 "cntlid": 97, 00:19:40.132 "qid": 0, 00:19:40.132 "state": "enabled", 00:19:40.132 "thread": "nvmf_tgt_poll_group_000", 00:19:40.132 "listen_address": { 00:19:40.132 "trtype": "TCP", 00:19:40.132 "adrfam": "IPv4", 00:19:40.132 "traddr": "10.0.0.2", 00:19:40.132 "trsvcid": "4420" 00:19:40.132 }, 00:19:40.132 "peer_address": { 00:19:40.132 "trtype": "TCP", 00:19:40.132 "adrfam": "IPv4", 00:19:40.132 "traddr": "10.0.0.1", 00:19:40.132 "trsvcid": "43118" 00:19:40.132 }, 00:19:40.132 "auth": { 00:19:40.132 "state": "completed", 00:19:40.132 "digest": "sha512", 00:19:40.132 "dhgroup": "null" 00:19:40.132 } 00:19:40.132 } 00:19:40.132 ]' 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.132 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.394 20:33:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret 
DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.341 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.602 00:19:41.602 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:41.602 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:41.602 20:33:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:41.863 { 00:19:41.863 "cntlid": 99, 00:19:41.863 "qid": 0, 00:19:41.863 "state": "enabled", 00:19:41.863 "thread": "nvmf_tgt_poll_group_000", 00:19:41.863 "listen_address": { 00:19:41.863 "trtype": "TCP", 00:19:41.863 "adrfam": "IPv4", 00:19:41.863 "traddr": "10.0.0.2", 00:19:41.863 "trsvcid": "4420" 00:19:41.863 }, 00:19:41.863 "peer_address": { 00:19:41.863 "trtype": "TCP", 00:19:41.863 "adrfam": "IPv4", 00:19:41.863 "traddr": "10.0.0.1", 00:19:41.863 "trsvcid": "40580" 00:19:41.863 }, 00:19:41.863 "auth": { 00:19:41.863 "state": "completed", 00:19:41.863 "digest": "sha512", 00:19:41.863 "dhgroup": "null" 00:19:41.863 } 00:19:41.863 } 00:19:41.863 ]' 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:41.863 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.123 20:33:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:42.692 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:42.692 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.692 20:33:35 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.952 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:43.212 00:19:43.212 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:43.212 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:43.212 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:43.471 { 00:19:43.471 "cntlid": 101, 00:19:43.471 "qid": 0, 00:19:43.471 "state": "enabled", 00:19:43.471 "thread": "nvmf_tgt_poll_group_000", 00:19:43.471 "listen_address": { 00:19:43.471 "trtype": "TCP", 00:19:43.471 "adrfam": "IPv4", 00:19:43.471 "traddr": "10.0.0.2", 00:19:43.471 "trsvcid": "4420" 00:19:43.471 }, 00:19:43.471 "peer_address": { 00:19:43.471 "trtype": "TCP", 00:19:43.471 "adrfam": "IPv4", 00:19:43.471 "traddr": "10.0.0.1", 00:19:43.471 "trsvcid": "40602" 00:19:43.471 }, 00:19:43.471 "auth": 
{ 00:19:43.471 "state": "completed", 00:19:43.471 "digest": "sha512", 00:19:43.471 "dhgroup": "null" 00:19:43.471 } 00:19:43.471 } 00:19:43.471 ]' 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.471 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.730 20:33:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.298 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:44.298 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.559 20:33:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:44.818 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.818 20:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:45.078 { 00:19:45.078 "cntlid": 103, 00:19:45.078 "qid": 0, 00:19:45.078 "state": "enabled", 00:19:45.078 "thread": "nvmf_tgt_poll_group_000", 00:19:45.078 "listen_address": { 00:19:45.078 "trtype": "TCP", 00:19:45.078 "adrfam": "IPv4", 00:19:45.078 "traddr": "10.0.0.2", 00:19:45.078 "trsvcid": "4420" 00:19:45.078 }, 00:19:45.078 "peer_address": { 00:19:45.078 "trtype": "TCP", 00:19:45.078 "adrfam": "IPv4", 00:19:45.078 "traddr": "10.0.0.1", 00:19:45.078 "trsvcid": "40624" 00:19:45.078 }, 00:19:45.078 "auth": { 00:19:45.078 "state": "completed", 00:19:45.078 "digest": "sha512", 00:19:45.078 "dhgroup": "null" 00:19:45.078 } 00:19:45.078 } 00:19:45.078 ]' 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.078 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.337 20:33:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:45.905 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:45.905 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.165 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.426 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:46.426 20:33:38 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:46.426 { 00:19:46.426 "cntlid": 105, 00:19:46.426 "qid": 0, 00:19:46.426 "state": "enabled", 00:19:46.426 "thread": "nvmf_tgt_poll_group_000", 00:19:46.426 "listen_address": { 00:19:46.426 "trtype": "TCP", 00:19:46.426 "adrfam": "IPv4", 00:19:46.426 "traddr": "10.0.0.2", 00:19:46.426 "trsvcid": "4420" 00:19:46.426 }, 00:19:46.426 "peer_address": { 00:19:46.426 "trtype": "TCP", 00:19:46.426 "adrfam": "IPv4", 00:19:46.426 "traddr": "10.0.0.1", 00:19:46.426 "trsvcid": "40668" 00:19:46.426 }, 00:19:46.426 "auth": { 00:19:46.426 "state": "completed", 00:19:46.426 "digest": "sha512", 00:19:46.426 "dhgroup": "ffdhe2048" 00:19:46.426 } 00:19:46.426 } 00:19:46.426 ]' 00:19:46.426 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:46.687 20:33:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:46.947 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
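The target/auth.sh@91, @92, and @93 markers in the trace expose the driver structure directly: three nested loops over digests, DH groups, and key indices, with connect_authenticate running one add_host/attach/verify/detach cycle per combination. Reassembled from those markers (array contents inferred from this excerpt, which covers sha384/sha512 with null, ffdhe2048, ffdhe6144, and ffdhe8192; key3 is deliberately registered without a controller key, so that pass exercises unidirectional auth):

    for digest in "${digests[@]}"; do          # target/auth.sh@91
        for dhgroup in "${dhgroups[@]}"; do    # target/auth.sh@92
            for keyid in "${!keys[@]}"; do     # target/auth.sh@93
                # Constrain the initiator, then run one full cycle.
                hostrpc bdev_nvme_set_options \
                    --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done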
00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.517 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:47.779 20:33:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:48.040 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:48.040 { 00:19:48.040 "cntlid": 107, 00:19:48.040 "qid": 0, 00:19:48.040 "state": "enabled", 00:19:48.040 "thread": 
"nvmf_tgt_poll_group_000", 00:19:48.040 "listen_address": { 00:19:48.040 "trtype": "TCP", 00:19:48.040 "adrfam": "IPv4", 00:19:48.040 "traddr": "10.0.0.2", 00:19:48.040 "trsvcid": "4420" 00:19:48.040 }, 00:19:48.040 "peer_address": { 00:19:48.040 "trtype": "TCP", 00:19:48.040 "adrfam": "IPv4", 00:19:48.040 "traddr": "10.0.0.1", 00:19:48.040 "trsvcid": "40684" 00:19:48.040 }, 00:19:48.040 "auth": { 00:19:48.040 "state": "completed", 00:19:48.040 "digest": "sha512", 00:19:48.040 "dhgroup": "ffdhe2048" 00:19:48.040 } 00:19:48.040 } 00:19:48.040 ]' 00:19:48.040 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.302 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.564 20:33:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.136 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:49.398 20:33:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.398 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:49.398 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.659 { 00:19:49.659 "cntlid": 109, 00:19:49.659 "qid": 0, 00:19:49.659 "state": "enabled", 00:19:49.659 "thread": "nvmf_tgt_poll_group_000", 00:19:49.659 "listen_address": { 00:19:49.659 "trtype": "TCP", 00:19:49.659 "adrfam": "IPv4", 00:19:49.659 "traddr": "10.0.0.2", 00:19:49.659 "trsvcid": "4420" 00:19:49.659 }, 00:19:49.659 "peer_address": { 00:19:49.659 "trtype": "TCP", 00:19:49.659 "adrfam": "IPv4", 00:19:49.659 "traddr": "10.0.0.1", 00:19:49.659 "trsvcid": "40718" 00:19:49.659 }, 00:19:49.659 "auth": { 00:19:49.659 "state": "completed", 00:19:49.659 "digest": "sha512", 00:19:49.659 "dhgroup": "ffdhe2048" 00:19:49.659 } 00:19:49.659 } 00:19:49.659 ]' 00:19:49.659 20:33:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.660 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.660 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.921 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.865 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.865 20:33:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:50.865 20:33:43 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:51.125 00:19:51.125 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.125 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.125 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.385 { 00:19:51.385 "cntlid": 111, 00:19:51.385 "qid": 0, 00:19:51.385 "state": "enabled", 00:19:51.385 "thread": "nvmf_tgt_poll_group_000", 00:19:51.385 "listen_address": { 00:19:51.385 "trtype": "TCP", 00:19:51.385 "adrfam": "IPv4", 00:19:51.385 "traddr": "10.0.0.2", 00:19:51.385 "trsvcid": "4420" 00:19:51.385 }, 00:19:51.385 "peer_address": { 00:19:51.385 "trtype": "TCP", 00:19:51.385 "adrfam": "IPv4", 00:19:51.385 "traddr": "10.0.0.1", 00:19:51.385 "trsvcid": "52982" 00:19:51.385 }, 00:19:51.385 "auth": { 00:19:51.385 "state": "completed", 00:19:51.385 "digest": "sha512", 00:19:51.385 "dhgroup": "ffdhe2048" 00:19:51.385 } 00:19:51.385 } 00:19:51.385 ]' 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.385 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:51.645 20:33:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.217 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.477 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:52.737 00:19:52.737 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:52.737 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:52.737 20:33:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:52.998 { 00:19:52.998 "cntlid": 113, 00:19:52.998 "qid": 0, 00:19:52.998 "state": "enabled", 00:19:52.998 "thread": "nvmf_tgt_poll_group_000", 00:19:52.998 "listen_address": { 00:19:52.998 "trtype": "TCP", 00:19:52.998 "adrfam": "IPv4", 00:19:52.998 "traddr": "10.0.0.2", 00:19:52.998 "trsvcid": "4420" 00:19:52.998 }, 00:19:52.998 "peer_address": { 00:19:52.998 "trtype": "TCP", 00:19:52.998 "adrfam": "IPv4", 00:19:52.998 "traddr": "10.0.0.1", 00:19:52.998 "trsvcid": "53012" 00:19:52.998 }, 00:19:52.998 "auth": { 00:19:52.998 "state": "completed", 00:19:52.998 "digest": "sha512", 00:19:52.998 "dhgroup": "ffdhe3072" 00:19:52.998 } 00:19:52.998 } 00:19:52.998 ]' 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.998 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.259 20:33:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.831 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:53.831 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.093 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:54.353 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.354 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.615 20:33:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.615 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.615 { 00:19:54.615 "cntlid": 115, 00:19:54.615 "qid": 0, 00:19:54.615 "state": "enabled", 00:19:54.615 "thread": "nvmf_tgt_poll_group_000", 00:19:54.615 "listen_address": { 00:19:54.615 "trtype": "TCP", 00:19:54.615 "adrfam": "IPv4", 00:19:54.615 "traddr": "10.0.0.2", 00:19:54.615 "trsvcid": "4420" 00:19:54.615 }, 00:19:54.615 "peer_address": { 00:19:54.615 "trtype": "TCP", 00:19:54.615 "adrfam": "IPv4", 00:19:54.615 "traddr": "10.0.0.1", 00:19:54.615 "trsvcid": "53038" 00:19:54.615 }, 00:19:54.615 "auth": { 00:19:54.615 "state": "completed", 00:19:54.615 "digest": "sha512", 00:19:54.615 "dhgroup": "ffdhe3072" 00:19:54.615 } 00:19:54.615 } 
00:19:54.615 ]' 00:19:54.615 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.616 20:33:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.877 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.448 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.448 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.708 20:33:47 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.708 20:33:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:55.968 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:55.968 { 00:19:55.968 "cntlid": 117, 00:19:55.968 "qid": 0, 00:19:55.968 "state": "enabled", 00:19:55.968 "thread": "nvmf_tgt_poll_group_000", 00:19:55.968 "listen_address": { 00:19:55.968 "trtype": "TCP", 00:19:55.968 "adrfam": "IPv4", 00:19:55.968 "traddr": "10.0.0.2", 00:19:55.968 "trsvcid": "4420" 00:19:55.968 }, 00:19:55.968 "peer_address": { 00:19:55.968 "trtype": "TCP", 00:19:55.968 "adrfam": "IPv4", 00:19:55.968 "traddr": "10.0.0.1", 00:19:55.968 "trsvcid": "53070" 00:19:55.968 }, 00:19:55.968 "auth": { 00:19:55.968 "state": "completed", 00:19:55.968 "digest": "sha512", 00:19:55.968 "dhgroup": "ffdhe3072" 00:19:55.968 } 00:19:55.968 } 00:19:55.968 ]' 00:19:55.968 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.228 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.489 20:33:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:19:57.061 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.062 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:19:57.322 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.323 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.323 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.323 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.323 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:57.583 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:57.583 { 00:19:57.583 "cntlid": 119, 00:19:57.583 "qid": 0, 00:19:57.583 "state": "enabled", 00:19:57.583 "thread": "nvmf_tgt_poll_group_000", 00:19:57.583 "listen_address": { 00:19:57.583 "trtype": "TCP", 00:19:57.583 "adrfam": "IPv4", 00:19:57.583 "traddr": "10.0.0.2", 00:19:57.583 "trsvcid": "4420" 00:19:57.583 }, 00:19:57.583 "peer_address": { 00:19:57.583 "trtype": "TCP", 00:19:57.583 "adrfam": "IPv4", 00:19:57.583 "traddr": "10.0.0.1", 00:19:57.583 "trsvcid": "53098" 00:19:57.583 }, 00:19:57.583 "auth": { 00:19:57.583 "state": "completed", 00:19:57.583 "digest": "sha512", 00:19:57.583 "dhgroup": "ffdhe3072" 00:19:57.583 } 00:19:57.583 } 00:19:57.583 ]' 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:57.583 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:57.844 20:33:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:57.844 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:57.844 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:57.844 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:57.844 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.844 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:58.786 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:58.786 20:33:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:58.786 20:33:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.786 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:59.047 00:19:59.047 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:59.047 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:59.047 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:59.308 { 00:19:59.308 "cntlid": 121, 00:19:59.308 "qid": 0, 00:19:59.308 "state": "enabled", 00:19:59.308 "thread": "nvmf_tgt_poll_group_000", 00:19:59.308 "listen_address": { 00:19:59.308 "trtype": "TCP", 00:19:59.308 "adrfam": "IPv4", 
00:19:59.308 "traddr": "10.0.0.2", 00:19:59.308 "trsvcid": "4420" 00:19:59.308 }, 00:19:59.308 "peer_address": { 00:19:59.308 "trtype": "TCP", 00:19:59.308 "adrfam": "IPv4", 00:19:59.308 "traddr": "10.0.0.1", 00:19:59.308 "trsvcid": "53132" 00:19:59.308 }, 00:19:59.308 "auth": { 00:19:59.308 "state": "completed", 00:19:59.308 "digest": "sha512", 00:19:59.308 "dhgroup": "ffdhe4096" 00:19:59.308 } 00:19:59.308 } 00:19:59.308 ]' 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:59.308 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.569 20:33:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:20:00.147 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.147 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:00.147 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.147 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.408 20:33:52 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.408 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.668 00:20:00.668 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.668 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.668 20:33:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.928 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:00.928 { 00:20:00.928 "cntlid": 123, 00:20:00.928 "qid": 0, 00:20:00.929 "state": "enabled", 00:20:00.929 "thread": "nvmf_tgt_poll_group_000", 00:20:00.929 "listen_address": { 00:20:00.929 "trtype": "TCP", 00:20:00.929 "adrfam": "IPv4", 00:20:00.929 "traddr": "10.0.0.2", 00:20:00.929 "trsvcid": "4420" 00:20:00.929 }, 00:20:00.929 "peer_address": { 00:20:00.929 "trtype": "TCP", 00:20:00.929 "adrfam": "IPv4", 00:20:00.929 "traddr": "10.0.0.1", 00:20:00.929 "trsvcid": "53158" 00:20:00.929 }, 00:20:00.929 "auth": { 00:20:00.929 "state": "completed", 00:20:00.929 "digest": "sha512", 00:20:00.929 "dhgroup": "ffdhe4096" 00:20:00.929 } 00:20:00.929 } 00:20:00.929 ]' 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:00.929 20:33:53 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.929 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.189 20:33:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.129 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.129 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.130 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.130 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.390 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:02.390 { 00:20:02.390 "cntlid": 125, 00:20:02.390 "qid": 0, 00:20:02.390 "state": "enabled", 00:20:02.390 "thread": "nvmf_tgt_poll_group_000", 00:20:02.390 "listen_address": { 00:20:02.390 "trtype": "TCP", 00:20:02.390 "adrfam": "IPv4", 00:20:02.390 "traddr": "10.0.0.2", 00:20:02.390 "trsvcid": "4420" 00:20:02.390 }, 00:20:02.390 "peer_address": { 00:20:02.390 "trtype": "TCP", 00:20:02.390 "adrfam": "IPv4", 00:20:02.390 "traddr": "10.0.0.1", 00:20:02.390 "trsvcid": "48536" 00:20:02.390 }, 00:20:02.390 "auth": { 00:20:02.390 "state": "completed", 00:20:02.390 "digest": "sha512", 00:20:02.390 "dhgroup": "ffdhe4096" 00:20:02.390 } 00:20:02.390 } 00:20:02.390 ]' 00:20:02.390 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.649 20:33:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.910 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.478 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
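With the SPDK host-side bdev path verified, each iteration also exercises the kernel initiator: nvme-cli connects with the same key material passed as DHHC-1 secret strings, disconnects, and the host is then deauthorized on the target. A sketch of that leg, where $key2 and $ckey2 stand in for the DHHC-1:02 host secret and DHHC-1:01 controller secret printed in the trace above:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # Kernel initiator: the host secret and the (bidirectional) controller
  # secret are passed directly on the command line.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid "${HOSTNQN#*uuid:}" \
      --dhchap-secret "$key2" --dhchap-ctrl-secret "$ckey2"
  nvme disconnect -n "$SUBNQN"

  # Deauthorize the host again so the next (digest, dhgroup, key) tuple
  # starts from a clean subsystem.
  "$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
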
00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.478 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:03.740 20:33:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:04.000 00:20:04.000 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:04.000 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:04.000 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:04.000 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:04.001 { 00:20:04.001 "cntlid": 127, 00:20:04.001 "qid": 0, 00:20:04.001 "state": "enabled", 00:20:04.001 "thread": "nvmf_tgt_poll_group_000", 00:20:04.001 "listen_address": { 00:20:04.001 "trtype": "TCP", 00:20:04.001 "adrfam": "IPv4", 00:20:04.001 "traddr": "10.0.0.2", 00:20:04.001 "trsvcid": "4420" 00:20:04.001 }, 00:20:04.001 "peer_address": { 00:20:04.001 "trtype": "TCP", 00:20:04.001 "adrfam": "IPv4", 00:20:04.001 "traddr": "10.0.0.1", 00:20:04.001 "trsvcid": "48562" 00:20:04.001 }, 00:20:04.001 "auth": { 00:20:04.001 "state": "completed", 00:20:04.001 "digest": "sha512", 00:20:04.001 "dhgroup": "ffdhe4096" 00:20:04.001 } 00:20:04.001 } 00:20:04.001 ]' 00:20:04.001 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.262 20:33:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.299 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.299 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.561 00:20:05.561 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:05.561 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:05.561 20:33:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:05.822 { 00:20:05.822 "cntlid": 129, 00:20:05.822 "qid": 0, 00:20:05.822 "state": "enabled", 00:20:05.822 "thread": "nvmf_tgt_poll_group_000", 00:20:05.822 "listen_address": { 00:20:05.822 "trtype": "TCP", 00:20:05.822 "adrfam": "IPv4", 00:20:05.822 "traddr": "10.0.0.2", 00:20:05.822 "trsvcid": "4420" 00:20:05.822 }, 00:20:05.822 "peer_address": { 00:20:05.822 "trtype": "TCP", 00:20:05.822 "adrfam": "IPv4", 00:20:05.822 "traddr": "10.0.0.1", 00:20:05.822 "trsvcid": "48582" 00:20:05.822 }, 00:20:05.822 "auth": { 00:20:05.822 "state": "completed", 00:20:05.822 "digest": "sha512", 00:20:05.822 "dhgroup": "ffdhe6144" 00:20:05.822 } 00:20:05.822 } 00:20:05.822 ]' 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:05.822 20:33:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.822 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.083 20:33:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.026 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.026 20:33:59 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.026 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.287 00:20:07.287 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:07.287 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:07.287 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:07.549 { 00:20:07.549 "cntlid": 131, 00:20:07.549 "qid": 0, 00:20:07.549 "state": "enabled", 00:20:07.549 "thread": "nvmf_tgt_poll_group_000", 00:20:07.549 "listen_address": { 00:20:07.549 "trtype": "TCP", 00:20:07.549 "adrfam": "IPv4", 00:20:07.549 "traddr": "10.0.0.2", 00:20:07.549 "trsvcid": "4420" 00:20:07.549 }, 00:20:07.549 "peer_address": { 00:20:07.549 "trtype": "TCP", 00:20:07.549 "adrfam": "IPv4", 00:20:07.549 "traddr": "10.0.0.1", 00:20:07.549 "trsvcid": "48614" 00:20:07.549 }, 00:20:07.549 "auth": { 00:20:07.549 "state": "completed", 00:20:07.549 "digest": "sha512", 00:20:07.549 "dhgroup": "ffdhe6144" 00:20:07.549 } 00:20:07.549 } 00:20:07.549 ]' 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.549 20:33:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:07.809 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:08.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:08.751 20:34:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.012 00:20:09.012 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.012 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.012 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.275 { 00:20:09.275 "cntlid": 133, 00:20:09.275 "qid": 0, 00:20:09.275 "state": "enabled", 00:20:09.275 "thread": "nvmf_tgt_poll_group_000", 00:20:09.275 "listen_address": { 00:20:09.275 "trtype": "TCP", 00:20:09.275 "adrfam": "IPv4", 00:20:09.275 "traddr": "10.0.0.2", 00:20:09.275 "trsvcid": "4420" 00:20:09.275 }, 00:20:09.275 "peer_address": { 00:20:09.275 "trtype": "TCP", 00:20:09.275 "adrfam": "IPv4", 00:20:09.275 "traddr": "10.0.0.1", 00:20:09.275 "trsvcid": "48636" 00:20:09.275 }, 00:20:09.275 "auth": { 00:20:09.275 "state": "completed", 00:20:09.275 "digest": "sha512", 00:20:09.275 "dhgroup": "ffdhe6144" 00:20:09.275 } 00:20:09.275 } 00:20:09.275 ]' 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.275 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.536 20:34:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
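
[annotation] The repeated blocks above and below are iterations of connect_authenticate from target/auth.sh: for each digest/dhgroup/key combination the host-side bdev_nvme server is pinned to exactly one DH-HMAC-CHAP digest and DH group, the host NQN is (re)added to the subsystem with the key under test, a controller is attached over TCP, and the negotiated parameters are read back from nvmf_subsystem_get_qpairs. A minimal standalone sketch of one such cycle follows, under these assumptions: the SPDK target answers on its default RPC socket, the host bdev_nvme server answers on /var/tmp/host.sock as in this trace, and keys named key2/ckey2 were registered during earlier setup (their DHHC-1 secrets are elided here).

#!/usr/bin/env bash
# Sketch of one connect_authenticate cycle, modeled on the trace above.
set -e
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
HOST_SOCK=/var/tmp/host.sock
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396

# Pin the initiator to a single digest/dhgroup so the handshake cannot
# negotiate anything other than the combination under test.
"$RPC" -s "$HOST_SOCK" bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

# Allow this host on the subsystem with the key pair under test.
"$RPC" nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key2 --dhchap-ctrlr-key ckey2

# Attach over TCP with in-band authentication, then read the negotiated
# parameters back from the target; auth state should be "completed".
"$RPC" -s "$HOST_SOCK" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
  -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
  --dhchap-key key2 --dhchap-ctrlr-key ckey2
"$RPC" nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth | .state, .digest, .dhgroup'

# Tear down so the next digest/dhgroup/key combination starts clean.
"$RPC" -s "$HOST_SOCK" bdev_nvme_detach_controller nvme0
"$RPC" nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

Pinning the host to a single digest/dhgroup per cycle is what lets the jq checks on .auth.digest and .auth.dhgroup in this trace double as proof that the handshake actually used the combination under test rather than a stronger fallback.
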
00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.480 20:34:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:10.741 00:20:10.741 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:10.741 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:10.741 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.003 { 00:20:11.003 "cntlid": 135, 00:20:11.003 "qid": 0, 00:20:11.003 "state": "enabled", 00:20:11.003 "thread": "nvmf_tgt_poll_group_000", 00:20:11.003 "listen_address": { 00:20:11.003 "trtype": "TCP", 00:20:11.003 "adrfam": "IPv4", 00:20:11.003 "traddr": "10.0.0.2", 00:20:11.003 "trsvcid": 
"4420" 00:20:11.003 }, 00:20:11.003 "peer_address": { 00:20:11.003 "trtype": "TCP", 00:20:11.003 "adrfam": "IPv4", 00:20:11.003 "traddr": "10.0.0.1", 00:20:11.003 "trsvcid": "48656" 00:20:11.003 }, 00:20:11.003 "auth": { 00:20:11.003 "state": "completed", 00:20:11.003 "digest": "sha512", 00:20:11.003 "dhgroup": "ffdhe6144" 00:20:11.003 } 00:20:11.003 } 00:20:11.003 ]' 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.003 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:11.264 20:34:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.836 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:11.836 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.097 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:12.670 00:20:12.670 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:12.670 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:12.670 20:34:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:12.931 { 00:20:12.931 "cntlid": 137, 00:20:12.931 "qid": 0, 00:20:12.931 "state": "enabled", 00:20:12.931 "thread": "nvmf_tgt_poll_group_000", 00:20:12.931 "listen_address": { 00:20:12.931 "trtype": "TCP", 00:20:12.931 "adrfam": "IPv4", 00:20:12.931 "traddr": "10.0.0.2", 00:20:12.931 "trsvcid": "4420" 00:20:12.931 }, 00:20:12.931 "peer_address": { 00:20:12.931 "trtype": "TCP", 00:20:12.931 "adrfam": "IPv4", 00:20:12.931 "traddr": "10.0.0.1", 00:20:12.931 "trsvcid": "36952" 00:20:12.931 }, 00:20:12.931 "auth": { 00:20:12.931 "state": "completed", 00:20:12.931 "digest": "sha512", 00:20:12.931 "dhgroup": "ffdhe8192" 00:20:12.931 } 00:20:12.931 } 00:20:12.931 ]' 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.931 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.191 20:34:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.763 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:13.763 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.024 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:14.595 00:20:14.595 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.595 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.595 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.856 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.856 20:34:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.856 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.856 20:34:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.856 { 00:20:14.856 "cntlid": 139, 00:20:14.856 "qid": 0, 00:20:14.856 "state": "enabled", 00:20:14.856 "thread": "nvmf_tgt_poll_group_000", 00:20:14.856 "listen_address": { 00:20:14.856 "trtype": "TCP", 00:20:14.856 "adrfam": "IPv4", 00:20:14.856 "traddr": "10.0.0.2", 00:20:14.856 "trsvcid": "4420" 00:20:14.856 }, 00:20:14.856 "peer_address": { 00:20:14.856 "trtype": "TCP", 00:20:14.856 "adrfam": "IPv4", 00:20:14.856 "traddr": "10.0.0.1", 00:20:14.856 "trsvcid": "36984" 00:20:14.856 }, 00:20:14.856 "auth": { 00:20:14.856 "state": "completed", 00:20:14.856 "digest": "sha512", 00:20:14.856 "dhgroup": "ffdhe8192" 00:20:14.856 } 00:20:14.856 } 00:20:14.856 ]' 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.856 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.116 20:34:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN: --dhchap-ctrl-secret DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==: 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:15.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
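
[annotation] The nvme connect / nvme disconnect pair that closes the cycle above is the host-kernel leg of the test: the same subsystem is reached through the kernel initiator with the raw DHHC-1 secrets, and the "disconnected 1 controller(s)" line is the success signal. Per the DH-HMAC-CHAP secret representation, the digit pair after DHHC-1: records how the secret is stored (00 = cleartext, 01/02/03 = transformed with SHA-256/384/512), which appears to be why the four test keys here carry different prefixes. A sketch of that leg, assuming an nvme-cli build with DHCHAP support and reusing the key1/ckey1 secrets from the trace above:

# Host-kernel leg of the cycle just completed above.
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
  -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 \
  --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 \
  --dhchap-secret 'DHHC-1:01:MWQ0YTExZjU0M2QzMDFlMmM3ZjI0ZGIzOWM2ZmIyOTkmoHSN:' \
  --dhchap-ctrl-secret 'DHHC-1:02:YzU5MjU2ZjA1NTM4YTM0NDI2NmUwODRiMzVmNDYxNWM1NmQwNjRlMjAxNjg4YWM3/Ym13Q==:'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0   # expect: disconnected 1 controller(s)

Further down in this trace (target/auth.sh@118 and @125), the same bdev_nvme_attach_controller call is deliberately issued with a key the subsystem was not configured to accept, and the NOT wrapper asserts that the JSON-RPC call fails with code -5 (Input/output error) instead of authenticating.
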
00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.687 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.947 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:16.520 00:20:16.520 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:16.520 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:16.520 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 
00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:16.781 { 00:20:16.781 "cntlid": 141, 00:20:16.781 "qid": 0, 00:20:16.781 "state": "enabled", 00:20:16.781 "thread": "nvmf_tgt_poll_group_000", 00:20:16.781 "listen_address": { 00:20:16.781 "trtype": "TCP", 00:20:16.781 "adrfam": "IPv4", 00:20:16.781 "traddr": "10.0.0.2", 00:20:16.781 "trsvcid": "4420" 00:20:16.781 }, 00:20:16.781 "peer_address": { 00:20:16.781 "trtype": "TCP", 00:20:16.781 "adrfam": "IPv4", 00:20:16.781 "traddr": "10.0.0.1", 00:20:16.781 "trsvcid": "37014" 00:20:16.781 }, 00:20:16.781 "auth": { 00:20:16.781 "state": "completed", 00:20:16.781 "digest": "sha512", 00:20:16.781 "dhgroup": "ffdhe8192" 00:20:16.781 } 00:20:16.781 } 00:20:16.781 ]' 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.781 20:34:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:16.781 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:16.781 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:16.781 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.781 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.781 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.043 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:02:MWY3NDViNmUzNTMxZTkwMWFlNWQzY2I5MGQwNGNhOGI0NDYyZmJjZDcwZmNlYjc1L9A9wA==: --dhchap-ctrl-secret DHHC-1:01:NDJiZDk3MTY5MDMwMzlhOTllYzIwNWM1NWFmOGQxZWEZim2j: 00:20:17.615 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.615 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:17.615 20:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.616 20:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.616 20:34:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.616 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:17.616 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.616 20:34:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target 
-- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.878 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:18.450 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:18.450 { 00:20:18.450 "cntlid": 143, 00:20:18.450 "qid": 0, 00:20:18.450 "state": "enabled", 00:20:18.450 "thread": "nvmf_tgt_poll_group_000", 00:20:18.450 "listen_address": { 00:20:18.450 "trtype": "TCP", 00:20:18.450 "adrfam": "IPv4", 00:20:18.450 "traddr": "10.0.0.2", 00:20:18.450 "trsvcid": "4420" 00:20:18.450 }, 00:20:18.450 "peer_address": { 00:20:18.450 "trtype": "TCP", 00:20:18.450 "adrfam": "IPv4", 00:20:18.450 "traddr": "10.0.0.1", 00:20:18.450 "trsvcid": "37030" 00:20:18.450 }, 00:20:18.450 "auth": { 00:20:18.450 "state": "completed", 00:20:18.450 "digest": "sha512", 00:20:18.450 "dhgroup": "ffdhe8192" 00:20:18.450 } 00:20:18.450 } 00:20:18.450 ]' 00:20:18.450 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 
-- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.711 20:34:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.972 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:19.547 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.808 20:34:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:20.378 00:20:20.378 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:20.378 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:20.378 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.378 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:20.379 { 00:20:20.379 "cntlid": 145, 00:20:20.379 "qid": 0, 00:20:20.379 "state": "enabled", 00:20:20.379 "thread": "nvmf_tgt_poll_group_000", 00:20:20.379 "listen_address": { 00:20:20.379 "trtype": "TCP", 00:20:20.379 "adrfam": "IPv4", 00:20:20.379 "traddr": "10.0.0.2", 00:20:20.379 "trsvcid": "4420" 00:20:20.379 }, 00:20:20.379 "peer_address": { 00:20:20.379 "trtype": "TCP", 00:20:20.379 "adrfam": "IPv4", 00:20:20.379 "traddr": "10.0.0.1", 00:20:20.379 "trsvcid": "37072" 00:20:20.379 }, 00:20:20.379 "auth": { 00:20:20.379 "state": "completed", 00:20:20.379 "digest": "sha512", 00:20:20.379 "dhgroup": "ffdhe8192" 00:20:20.379 } 00:20:20.379 } 00:20:20.379 ]' 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:20.379 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.639 20:34:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:00:YTYwYzQzMzBkMjVhMDYxMDNmYTE3OTcxYjYzYzlhMWQ2NmZmMjliOGEyYmUyZTUxswjCTA==: --dhchap-ctrl-secret DHHC-1:03:YTNiNzcwMmQ2YjE5YzcxZGYwMDlhMDMzNmExYzE0N2VlYzExYTZlYzAwYjU2ZDU3NjM2YzYyODg2Mjk3MDUyZAzC/fY=: 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.580 20:34:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:20:21.840 request: 00:20:21.841 { 00:20:21.841 "name": "nvme0", 00:20:21.841 "trtype": "tcp", 00:20:21.841 "traddr": "10.0.0.2", 00:20:21.841 "adrfam": "ipv4", 00:20:21.841 "trsvcid": "4420", 00:20:21.841 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:21.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:21.841 "prchk_reftag": false, 00:20:21.841 "prchk_guard": false, 00:20:21.841 "hdgst": false, 00:20:21.841 "ddgst": false, 00:20:21.841 "dhchap_key": "key2", 00:20:21.841 "method": "bdev_nvme_attach_controller", 00:20:21.841 "req_id": 1 00:20:21.841 } 00:20:21.841 Got JSON-RPC error response 00:20:21.841 response: 00:20:21.841 { 00:20:21.841 "code": -5, 00:20:21.841 "message": "Input/output error" 00:20:21.841 } 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:21.841 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:22.410 request: 00:20:22.410 { 00:20:22.410 "name": "nvme0", 00:20:22.410 "trtype": "tcp", 00:20:22.410 "traddr": "10.0.0.2", 00:20:22.410 "adrfam": "ipv4", 00:20:22.410 "trsvcid": "4420", 00:20:22.410 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:22.410 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:22.410 "prchk_reftag": false, 00:20:22.410 "prchk_guard": false, 00:20:22.410 "hdgst": false, 00:20:22.410 "ddgst": false, 00:20:22.410 "dhchap_key": "key1", 00:20:22.410 "dhchap_ctrlr_key": "ckey2", 00:20:22.410 "method": "bdev_nvme_attach_controller", 00:20:22.410 "req_id": 1 00:20:22.410 } 00:20:22.410 Got JSON-RPC error response 00:20:22.410 response: 00:20:22.410 { 00:20:22.410 "code": -5, 00:20:22.410 "message": "Input/output error" 00:20:22.410 } 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key1 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:22.410 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:22.411 20:34:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.411 20:34:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:22.979 request: 00:20:22.979 { 00:20:22.979 "name": "nvme0", 00:20:22.979 "trtype": "tcp", 00:20:22.979 "traddr": "10.0.0.2", 00:20:22.979 "adrfam": "ipv4", 00:20:22.979 "trsvcid": "4420", 00:20:22.979 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:22.979 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:22.979 "prchk_reftag": false, 00:20:22.979 "prchk_guard": false, 00:20:22.979 "hdgst": false, 00:20:22.979 "ddgst": false, 00:20:22.979 "dhchap_key": "key1", 00:20:22.979 "dhchap_ctrlr_key": "ckey1", 00:20:22.979 "method": "bdev_nvme_attach_controller", 00:20:22.979 "req_id": 1 00:20:22.979 } 00:20:22.979 Got JSON-RPC error response 00:20:22.979 response: 00:20:22.979 { 00:20:22.979 "code": -5, 00:20:22.979 "message": "Input/output error" 00:20:22.979 } 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 1332759 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1332759 ']' 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1332759 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1332759 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1332759' 00:20:22.979 killing process with pid 1332759 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1332759 00:20:22.979 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1332759 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=1358739 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 1358739 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1358739 ']' 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:23.239 20:34:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 1358739 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 1358739 ']' 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:24.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
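Note on the failures traced above: a DH-HMAC-CHAP attach succeeds only when the host-side --dhchap-key (and --dhchap-ctrlr-key, if supplied) match what nvmf_subsystem_add_host registered for that host NQN; any mismatch surfaces as the JSON-RPC code -5 "Input/output error" responses seen here. A minimal sketch of the matching flow, assuming rpc.py is on PATH and using the socket paths from this run, with $HOSTNQN standing in for the uuid host NQN:

# Target side: allow the host with key1 only (no controller key registered).
rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key1

# Host side: the same key authenticates; key2, or key1 paired with an
# unregistered ckey, fails with -5 Input/output error as logged above.
rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1
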
00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.177 20:34:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:24.745 00:20:24.745 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:24.745 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:24.745 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:25.003 { 00:20:25.003 
"cntlid": 1, 00:20:25.003 "qid": 0, 00:20:25.003 "state": "enabled", 00:20:25.003 "thread": "nvmf_tgt_poll_group_000", 00:20:25.003 "listen_address": { 00:20:25.003 "trtype": "TCP", 00:20:25.003 "adrfam": "IPv4", 00:20:25.003 "traddr": "10.0.0.2", 00:20:25.003 "trsvcid": "4420" 00:20:25.003 }, 00:20:25.003 "peer_address": { 00:20:25.003 "trtype": "TCP", 00:20:25.003 "adrfam": "IPv4", 00:20:25.003 "traddr": "10.0.0.1", 00:20:25.003 "trsvcid": "41490" 00:20:25.003 }, 00:20:25.003 "auth": { 00:20:25.003 "state": "completed", 00:20:25.003 "digest": "sha512", 00:20:25.003 "dhgroup": "ffdhe8192" 00:20:25.003 } 00:20:25.003 } 00:20:25.003 ]' 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:25.003 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:25.263 20:34:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid 00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-secret DHHC-1:03:OGVjMDE5NzNiNTY5MTE1NGY1NmY4M2YyN2QwNjViMTI5MjI0NTU0YjZiNWM2MDg4MDg3ZDYwMDZkYWFmYWNiM6UT9W8=: 00:20:25.833 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:26.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --dhchap-key key3 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.095 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.356 request: 00:20:26.356 { 00:20:26.356 "name": "nvme0", 00:20:26.356 "trtype": "tcp", 00:20:26.356 "traddr": "10.0.0.2", 00:20:26.356 "adrfam": "ipv4", 00:20:26.356 "trsvcid": "4420", 00:20:26.356 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:26.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.356 "prchk_reftag": false, 00:20:26.356 "prchk_guard": false, 00:20:26.356 "hdgst": false, 00:20:26.356 "ddgst": false, 00:20:26.356 "dhchap_key": "key3", 00:20:26.356 "method": "bdev_nvme_attach_controller", 00:20:26.356 "req_id": 1 00:20:26.356 } 00:20:26.356 Got JSON-RPC error response 00:20:26.356 response: 00:20:26.356 { 00:20:26.356 "code": -5, 00:20:26.356 "message": "Input/output error" 00:20:26.356 } 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.356 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:26.617 request: 00:20:26.617 { 00:20:26.617 "name": "nvme0", 00:20:26.617 "trtype": "tcp", 00:20:26.617 "traddr": "10.0.0.2", 00:20:26.617 "adrfam": "ipv4", 00:20:26.617 "trsvcid": "4420", 00:20:26.617 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:26.617 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.617 "prchk_reftag": false, 00:20:26.617 "prchk_guard": false, 00:20:26.617 "hdgst": false, 00:20:26.617 "ddgst": false, 00:20:26.617 "dhchap_key": "key3", 00:20:26.617 "method": "bdev_nvme_attach_controller", 00:20:26.617 "req_id": 1 00:20:26.617 } 00:20:26.617 Got JSON-RPC error response 00:20:26.617 response: 00:20:26.617 { 00:20:26.617 "code": -5, 00:20:26.617 "message": "Input/output error" 00:20:26.617 } 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.617 20:34:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:26.878 request: 00:20:26.878 { 00:20:26.878 "name": "nvme0", 00:20:26.878 "trtype": "tcp", 00:20:26.878 "traddr": "10.0.0.2", 00:20:26.878 "adrfam": "ipv4", 00:20:26.878 "trsvcid": "4420", 00:20:26.878 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:26.878 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396", 00:20:26.878 "prchk_reftag": false, 00:20:26.878 "prchk_guard": false, 00:20:26.878 "hdgst": false, 00:20:26.878 "ddgst": false, 00:20:26.878 
"dhchap_key": "key0", 00:20:26.878 "dhchap_ctrlr_key": "key1", 00:20:26.878 "method": "bdev_nvme_attach_controller", 00:20:26.878 "req_id": 1 00:20:26.878 } 00:20:26.878 Got JSON-RPC error response 00:20:26.878 response: 00:20:26.878 { 00:20:26.878 "code": -5, 00:20:26.878 "message": "Input/output error" 00:20:26.878 } 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:26.878 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:20:27.139 00:20:27.139 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:20:27.139 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:20:27.139 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.399 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.399 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.399 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1332812 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1332812 ']' 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1332812 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1332812 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1332812' 00:20:27.659 killing process with pid 1332812 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1332812 00:20:27.659 20:34:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1332812 
00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.920 rmmod nvme_tcp 00:20:27.920 rmmod nvme_fabrics 00:20:27.920 rmmod nvme_keyring 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 1358739 ']' 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 1358739 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 1358739 ']' 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 1358739 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1358739 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1358739' 00:20:27.920 killing process with pid 1358739 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 1358739 00:20:27.920 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 1358739 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:28.180 20:34:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.091 20:34:22 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:30.091 20:34:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.ZED /tmp/spdk.key-sha256.Bn8 /tmp/spdk.key-sha384.xSG /tmp/spdk.key-sha512.Cl0 /tmp/spdk.key-sha512.zyI /tmp/spdk.key-sha384.KCJ /tmp/spdk.key-sha256.rvi '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:30.091 00:20:30.091 real 2m21.652s 00:20:30.091 user 5m13.003s 00:20:30.091 sys 0m19.793s 00:20:30.091 20:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:30.091 20:34:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.091 ************************************ 00:20:30.091 END TEST nvmf_auth_target 00:20:30.091 ************************************ 00:20:30.091 20:34:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:30.091 20:34:22 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:20:30.091 20:34:22 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:30.091 20:34:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:20:30.091 20:34:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:30.091 20:34:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:30.352 ************************************ 00:20:30.352 START TEST nvmf_bdevio_no_huge 00:20:30.352 ************************************ 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:30.352 * Looking for test storage... 00:20:30.352 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.352 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
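The NVMF_APP argument array assembled in this trace is what makes the test hugepage-free: build_nvmf_app_args appends the NO_HUGE flags (next trace line), so the target runs on ordinary 4 KiB pages with a fixed memory size instead of a hugepage pool. The effective launch, with the values used later in this run:

# -s 1024 caps DPDK at 1024 MB of regular memory (no hugepages);
# -m 0x78 pins reactors to cores 3-6, matching the reactor messages below.
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
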
00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:20:30.353 20:34:22 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:38.494 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:38.494 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:38.494 Found net devices under 0000:31:00.0: cvl_0_0 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:38.494 Found net devices under 0000:31:00.1: cvl_0_1 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:38.494 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:38.494 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.712 ms 00:20:38.494 00:20:38.494 --- 10.0.0.2 ping statistics --- 00:20:38.494 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.494 rtt min/avg/max/mdev = 0.712/0.712/0.712/0.000 ms 00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:38.494 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0
00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:38.494 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=1364446
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 1364446
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 1364446 ']'
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:38.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable
00:20:38.495 20:34:30 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:38.495 [2024-07-15 20:34:30.738945] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:20:38.495 [2024-07-15 20:34:30.739016] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ]
00:20:38.495 [2024-07-15 20:34:30.842062] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:38.757 [2024-07-15 20:34:30.948363] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:38.757 [2024-07-15 20:34:30.948416] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:38.757 [2024-07-15 20:34:30.948424] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:38.757 [2024-07-15 20:34:30.948431] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:38.757 [2024-07-15 20:34:30.948437] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:38.757 [2024-07-15 20:34:30.948594] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:20:38.757 [2024-07-15 20:34:30.948745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:20:38.757 [2024-07-15 20:34:30.948904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:38.757 [2024-07-15 20:34:30.948904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.331 [2024-07-15 20:34:31.579008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.331 Malloc0 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.331 20:34:31 
nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x
00:20:39.331 [2024-07-15 20:34:31.632547] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=()
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:20:39.331 {
00:20:39.331 "params": {
00:20:39.331 "name": "Nvme$subsystem",
00:20:39.331 "trtype": "$TEST_TRANSPORT",
00:20:39.331 "traddr": "$NVMF_FIRST_TARGET_IP",
00:20:39.331 "adrfam": "ipv4",
00:20:39.331 "trsvcid": "$NVMF_PORT",
00:20:39.331 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:20:39.331 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:20:39.331 "hdgst": ${hdgst:-false},
00:20:39.331 "ddgst": ${ddgst:-false}
00:20:39.331 },
00:20:39.331 "method": "bdev_nvme_attach_controller"
00:20:39.331 }
00:20:39.331 EOF
00:20:39.331 )")
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq .
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=,
00:20:39.331 20:34:31 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:20:39.331 "params": {
00:20:39.331 "name": "Nvme1",
00:20:39.332 "trtype": "tcp",
00:20:39.332 "traddr": "10.0.0.2",
00:20:39.332 "adrfam": "ipv4",
00:20:39.332 "trsvcid": "4420",
00:20:39.332 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:20:39.332 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:20:39.332 "hdgst": false,
00:20:39.332 "ddgst": false
00:20:39.332 },
00:20:39.332 "method": "bdev_nvme_attach_controller"
00:20:39.332 }'
00:20:39.332 [2024-07-15 20:34:31.690036] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:20:39.332 [2024-07-15 20:34:31.690103] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1364703 ]
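
gen_nvmf_target_json above expands the heredoc template once per subsystem, pipes it through jq, and hands the result to bdevio as --json /dev/fd/62, i.e., via process substitution instead of a temporary file. A sketch of the same pattern (tool path and variable names hypothetical; the JSON fragment matches the printf output in the log):

    config='{ "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
              "adrfam": "ipv4", "trsvcid": "4420",
              "subnqn": "nqn.2016-06.io.spdk:cnode1",
              "hostnqn": "nqn.2016-06.io.spdk:host1",
              "hdgst": false, "ddgst": false },
              "method": "bdev_nvme_attach_controller" }'
    ./bdevio --json <(printf '%s\n' "$config") --no-huge -s 1024   # <(...) shows up as /dev/fd/NN
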
00:20:39.641 [2024-07-15 20:34:31.764964] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:20:39.641 [2024-07-15 20:34:31.861619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:39.641 [2024-07-15 20:34:31.861744] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:39.641 [2024-07-15 20:34:31.861747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:39.641 I/O targets:
00:20:39.641 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:20:39.641
00:20:39.641
00:20:39.641 CUnit - A unit testing framework for C - Version 2.1-3
00:20:39.641 http://cunit.sourceforge.net/
00:20:39.641
00:20:39.641
00:20:39.641 Suite: bdevio tests on: Nvme1n1
00:20:39.940 Test: blockdev write read block ...passed
00:20:39.940 Test: blockdev write zeroes read block ...passed
00:20:39.940 Test: blockdev write zeroes read no split ...passed
00:20:39.940 Test: blockdev write zeroes read split ...passed
00:20:39.940 Test: blockdev write zeroes read split partial ...passed
00:20:39.940 Test: blockdev reset ...[2024-07-15 20:34:32.217368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:39.940 [2024-07-15 20:34:32.217430] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1966970 (9): Bad file descriptor
00:20:39.940 [2024-07-15 20:34:32.231807] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:20:39.940 passed 00:20:39.940 Test: blockdev write read 8 blocks ...passed 00:20:39.940 Test: blockdev write read size > 128k ...passed 00:20:39.940 Test: blockdev write read invalid size ...passed 00:20:40.201 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:40.201 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:40.201 Test: blockdev write read max offset ...passed 00:20:40.201 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:40.201 Test: blockdev writev readv 8 blocks ...passed 00:20:40.201 Test: blockdev writev readv 30 x 1block ...passed 00:20:40.201 Test: blockdev writev readv block ...passed 00:20:40.201 Test: blockdev writev readv size > 128k ...passed 00:20:40.201 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:40.201 Test: blockdev comparev and writev ...[2024-07-15 20:34:32.538795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.538818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.538829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.538835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.539364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.539372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.539382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.539387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.539859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.539865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.539875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.539880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.540382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.540393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:40.201 [2024-07-15 20:34:32.540402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:40.201 [2024-07-15 20:34:32.540408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:40.463 passed 00:20:40.463 Test: blockdev nvme passthru rw ...passed 00:20:40.463 Test: blockdev nvme passthru vendor specific ...[2024-07-15 20:34:32.625184] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.463 [2024-07-15 20:34:32.625194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:40.463 [2024-07-15 20:34:32.625568] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.463 [2024-07-15 20:34:32.625575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:40.463 [2024-07-15 20:34:32.625940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.463 [2024-07-15 20:34:32.625946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:40.463 [2024-07-15 20:34:32.626314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:40.463 [2024-07-15 20:34:32.626321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:40.463 passed 00:20:40.463 Test: blockdev nvme admin passthru ...passed 00:20:40.463 Test: blockdev copy ...passed 00:20:40.463 00:20:40.463 Run Summary: Type Total Ran Passed Failed Inactive 00:20:40.463 suites 1 1 n/a 0 0 00:20:40.463 tests 23 23 23 0 0 00:20:40.463 asserts 152 152 152 0 n/a 00:20:40.463 00:20:40.463 Elapsed time = 1.368 seconds 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:40.724 20:34:32 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:40.724 rmmod nvme_tcp 00:20:40.724 rmmod nvme_fabrics 00:20:40.724 rmmod nvme_keyring 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 1364446 ']' 00:20:40.724 20:34:33 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 1364446 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 1364446 ']' 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 1364446 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1364446 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1364446' 00:20:40.724 killing process with pid 1364446 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 1364446 00:20:40.724 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 1364446 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:41.294 20:34:33 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.204 20:34:35 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:43.204 00:20:43.204 real 0m12.980s 00:20:43.204 user 0m13.999s 00:20:43.204 sys 0m6.917s 00:20:43.204 20:34:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:43.205 20:34:35 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:43.205 ************************************ 00:20:43.205 END TEST nvmf_bdevio_no_huge 00:20:43.205 ************************************ 00:20:43.205 20:34:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:20:43.205 20:34:35 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:43.205 20:34:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:43.205 20:34:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:43.205 20:34:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:20:43.205 ************************************ 00:20:43.205 START TEST nvmf_tls 00:20:43.205 ************************************ 00:20:43.205 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:43.465 * Looking for test storage... 
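
With the bdevio suite finished, nvmftestfini above tears the fixture down in reverse: the trap-driven cleanup kills the target (killprocess 1364446), the nvme_tcp/nvme_fabrics/nvme_keyring modules are unloaded (the rmmod lines), _remove_spdk_ns deletes the namespace, and the leftover initiator address is flushed. In sketch form, with the same hypothetical names as in the setup sketch earlier:

    kill "$nvmfpid"                          # stop the target first
    modprobe -v -r nvme-tcp nvme-fabrics     # mirrors the rmmod output above
    ip netns delete tgt_ns                   # eth0 falls back to the root namespace
    ip -4 addr flush eth1
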
00:20:43.465 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:20:43.465 20:34:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:20:51.611 
20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:20:51.611 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:20:51.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:20:51.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:20:51.612 Found net devices under 0000:31:00.0: cvl_0_0 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:20:51.612 Found net devices under 0000:31:00.1: cvl_0_1 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:51.612 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:51.612 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.687 ms 00:20:51.612 00:20:51.612 --- 10.0.0.2 ping statistics --- 00:20:51.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.612 rtt min/avg/max/mdev = 0.687/0.687/0.687/0.000 ms 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:51.612 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:51.612 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.330 ms 00:20:51.612 00:20:51.612 --- 10.0.0.1 ping statistics --- 00:20:51.612 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:51.612 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1369566 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1369566 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1369566 ']' 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:51.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:51.612 20:34:43 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:51.612 [2024-07-15 20:34:43.761847] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:20:51.612 [2024-07-15 20:34:43.761911] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:51.612 EAL: No free 2048 kB hugepages reported on node 1 00:20:51.612 [2024-07-15 20:34:43.861696] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:51.612 [2024-07-15 20:34:43.955032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:51.612 [2024-07-15 20:34:43.955086] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:51.612 [2024-07-15 20:34:43.955094] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:51.612 [2024-07-15 20:34:43.955101] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:51.612 [2024-07-15 20:34:43.955107] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:51.612 [2024-07-15 20:34:43.955132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:52.185 20:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:52.185 20:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:20:52.185 20:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:52.185 20:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:52.185 20:34:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:52.446 20:34:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:52.447 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:20:52.447 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:52.447 true 00:20:52.447 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.447 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:20:52.707 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:20:52.708 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:20:52.708 20:34:44 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:52.969 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:52.969 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:20:52.969 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:20:52.969 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:20:52.969 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:53.230 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.230 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:20:53.491 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:20:53.753 20:34:45 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:53.753 20:34:45 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls
00:20:53.753 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true
00:20:53.753 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]]
00:20:53.753 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls
00:20:54.014 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl
00:20:54.014 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]]
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python -
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.bGEqRH2nDw
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp
00:20:54.275 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.XH7o1SiKFF
00:20:54.276 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
00:20:54.276 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y:
00:20:54.276 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.bGEqRH2nDw
00:20:54.276 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.XH7o1SiKFF
00:20:54.276 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13
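
format_interchange_psk above wraps an inline python snippet that renders the secret in the NVMe TLS PSK interchange form: the prefix NVMeTLSkey-1:, a two-digit hash identifier (01 here, from the digest argument), the base64 of the key bytes followed by their CRC-32 (4 bytes, little-endian), and a trailing colon. A sketch of what that snippet plausibly computes, assuming the hex string is taken as its ASCII bytes (which the 48-character base64 output above suggests):

    printf '%s\n' \
      'import base64, struct, zlib' \
      'secret = b"00112233445566778899aabbccddeeff"  # taken as ASCII bytes' \
      'crc = struct.pack("<I", zlib.crc32(secret))   # 4-byte little-endian CRC-32' \
      'print("NVMeTLSkey-1:01:" + base64.b64encode(secret + crc).decode() + ":")' \
      | python3 -

The two resulting keys are written to mktemp files and chmod'ed 0600 because they are secrets handed to the target and initiator by path.
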
00:20:54.536 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init
00:20:54.797 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.bGEqRH2nDw
00:20:54.797 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.bGEqRH2nDw
00:20:54.797 20:34:46 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:20:54.797 [2024-07-15 20:34:47.109582] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:54.797 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
00:20:55.059 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
00:20:55.059 [2024-07-15 20:34:47.414309] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental
00:20:55.059 [2024-07-15 20:34:47.414479] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:55.059 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
00:20:55.320 malloc0
00:20:55.320 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
00:20:55.581 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bGEqRH2nDw
00:20:55.581 [2024-07-15 20:34:47.877444] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09
00:20:55.581 20:34:47 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.bGEqRH2nDw
00:20:55.581 EAL: No free 2048 kB hugepages reported on node 1
00:21:07.818 Initializing NVMe Controllers
00:21:07.818 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:21:07.818 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:21:07.818 Initialization complete. Launching workers.
00:21:07.818 ========================================================
00:21:07.818 Latency(us)
00:21:07.818 Device Information : IOPS MiB/s Average min max
00:21:07.818 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 19046.75 74.40 3360.22 1063.41 4778.55
00:21:07.818 ========================================================
00:21:07.818 Total : 19046.75 74.40 3360.22 1063.41 4778.55
00:21:07.818
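
Condensed, the TLS bring-up that produced the run above is a short RPC sequence against the target, followed by spdk_nvme_perf connecting with the ssl sock implementation and the PSK file (long paths trimmed; $key_path is the /tmp/tmp.bGEqRH2nDw file created earlier):

    rpc.py sock_impl_set_options -i ssl --tls-version 13
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k enables TLS
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key_path"
    spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
        --psk-path "$key_path"
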
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.bGEqRH2nDw
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bGEqRH2nDw'
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1372448
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT
00:21:07.818 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1372448 /var/tmp/bdevperf.sock
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1372448 ']'
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:21:07.819 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable
00:21:07.819 20:34:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:21:07.819 [2024-07-15 20:34:58.040807] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:21:07.819 [2024-07-15 20:34:58.040862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1372448 ] 00:21:07.819 EAL: No free 2048 kB hugepages reported on node 1 00:21:07.819 [2024-07-15 20:34:58.095225] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:07.819 [2024-07-15 20:34:58.147654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:07.819 20:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:07.819 20:34:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:07.819 20:34:58 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bGEqRH2nDw 00:21:07.819 [2024-07-15 20:34:58.964698] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:07.819 [2024-07-15 20:34:58.964755] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:07.819 TLSTESTn1 00:21:07.819 20:34:59 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:07.819 Running I/O for 10 seconds... 00:21:17.824 00:21:17.824 Latency(us) 00:21:17.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:17.824 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:17.824 Verification LBA range: start 0x0 length 0x2000 00:21:17.824 TLSTESTn1 : 10.04 3533.91 13.80 0.00 0.00 36163.94 4587.52 86507.52 00:21:17.824 =================================================================================================================== 00:21:17.824 Total : 3533.91 13.80 0.00 0.00 36163.94 4587.52 86507.52 00:21:17.824 0 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1372448 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1372448 ']' 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1372448 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1372448 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1372448' 00:21:17.824 killing process with pid 1372448 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1372448 00:21:17.824 Received shutdown signal, test time was about 10.000000 seconds 00:21:17.824 00:21:17.824 Latency(us) 00:21:17.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:17.824 =================================================================================================================== 00:21:17.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:17.824 [2024-07-15 20:35:09.287414] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1372448 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XH7o1SiKFF 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XH7o1SiKFF 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.XH7o1SiKFF 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.XH7o1SiKFF' 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1374560 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1374560 /var/tmp/bdevperf.sock 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1374560 ']' 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:17.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:17.824 20:35:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.825 [2024-07-15 20:35:09.460480] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:17.825 [2024-07-15 20:35:09.460535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374560 ] 00:21:17.825 EAL: No free 2048 kB hugepages reported on node 1 00:21:17.825 [2024-07-15 20:35:09.516792] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.825 [2024-07-15 20:35:09.567842] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.XH7o1SiKFF 00:21:18.086 [2024-07-15 20:35:10.372997] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:18.086 [2024-07-15 20:35:10.373066] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:18.086 [2024-07-15 20:35:10.380541] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:18.086 [2024-07-15 20:35:10.380924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x711d80 (107): Transport endpoint is not connected 00:21:18.086 [2024-07-15 20:35:10.381920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x711d80 (9): Bad file descriptor 00:21:18.086 [2024-07-15 20:35:10.382921] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:18.086 [2024-07-15 20:35:10.382931] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:18.086 [2024-07-15 20:35:10.382938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
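Annotation: the request/response pair recorded next can be reproduced by hand — bdevperf exposes the same JSON-RPC server as the main SPDK app on the socket passed via -r. A minimal client sketch follows; the socket path and method come from the log, while the read-until-parseable loop and the reduced parameter set (defaults such as prchk_reftag are omitted) are assumptions, not SPDK's own client code:

import json
import socket

def rpc(sock_path: str, method: str, params: dict) -> dict:
    # Connect to the SPDK JSON-RPC Unix socket (bdevperf's -r argument above).
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(sock_path)
    s.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                          "method": method, "params": params}).encode())
    buf = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            raise ConnectionError("server closed before replying")
        buf += chunk
        try:
            return json.loads(buf)   # reply is a single JSON document
        except json.JSONDecodeError:
            continue                 # keep reading until it parses whole

# Essentials of the call target/tls.sh@34 issues via rpc.py, with the key
# that does not match the target's registered PSK:
print(rpc("/var/tmp/bdevperf.sock", "bdev_nvme_attach_controller", {
    "name": "TLSTEST", "trtype": "tcp", "traddr": "10.0.0.2",
    "adrfam": "ipv4", "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "psk": "/tmp/tmp.XH7o1SiKFF",
}))  # expect {"code": -5, "message": "Input/output error"}, as logged below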
00:21:18.086 request: 00:21:18.086 { 00:21:18.086 "name": "TLSTEST", 00:21:18.086 "trtype": "tcp", 00:21:18.086 "traddr": "10.0.0.2", 00:21:18.086 "adrfam": "ipv4", 00:21:18.086 "trsvcid": "4420", 00:21:18.086 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:18.086 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:18.086 "prchk_reftag": false, 00:21:18.086 "prchk_guard": false, 00:21:18.086 "hdgst": false, 00:21:18.086 "ddgst": false, 00:21:18.086 "psk": "/tmp/tmp.XH7o1SiKFF", 00:21:18.086 "method": "bdev_nvme_attach_controller", 00:21:18.086 "req_id": 1 00:21:18.086 } 00:21:18.086 Got JSON-RPC error response 00:21:18.086 response: 00:21:18.086 { 00:21:18.086 "code": -5, 00:21:18.086 "message": "Input/output error" 00:21:18.086 } 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1374560 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1374560 ']' 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1374560 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:18.086 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1374560 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1374560' 00:21:18.348 killing process with pid 1374560 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1374560 00:21:18.348 Received shutdown signal, test time was about 10.000000 seconds 00:21:18.348 00:21:18.348 Latency(us) 00:21:18.348 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:18.348 =================================================================================================================== 00:21:18.348 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:18.348 [2024-07-15 20:35:10.467645] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1374560 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bGEqRH2nDw 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bGEqRH2nDw 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.bGEqRH2nDw 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bGEqRH2nDw' 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1374904 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1374904 /var/tmp/bdevperf.sock 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1374904 ']' 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:18.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:18.348 20:35:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:18.348 [2024-07-15 20:35:10.623805] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:18.348 [2024-07-15 20:35:10.623861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1374904 ] 00:21:18.348 EAL: No free 2048 kB hugepages reported on node 1 00:21:18.348 [2024-07-15 20:35:10.680193] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:18.609 [2024-07-15 20:35:10.731520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.179 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:19.179 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:19.179 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.bGEqRH2nDw 00:21:19.179 [2024-07-15 20:35:11.532673] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.179 [2024-07-15 20:35:11.532735] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:19.179 [2024-07-15 20:35:11.543294] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:19.179 [2024-07-15 20:35:11.543312] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:19.179 [2024-07-15 20:35:11.543332] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:19.179 [2024-07-15 20:35:11.543857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70cd80 (107): Transport endpoint is not connected 00:21:19.179 [2024-07-15 20:35:11.544852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x70cd80 (9): Bad file descriptor 00:21:19.179 [2024-07-15 20:35:11.545854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:19.179 [2024-07-15 20:35:11.545861] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:19.179 [2024-07-15 20:35:11.545868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
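Annotation: this failure mode differs from the previous case — the connection reaches the target, but the target-side PSK lookup fails because the TLS PSK identity binds the host NQN and subsystem NQN together. The identity string can be sketched as below; the template is read off the error line itself, so treat it as observed output rather than a spec reference. The same lookup failure recurs at target/tls.sh@152, where the subsystem NQN rather than the host NQN is swapped:

def psk_identity(hostnqn: str, subnqn: str) -> str:
    # "NVMe0R01 <hostnqn> <subnqn>", as printed by tcp_sock_get_key above.
    return f"NVMe0R01 {hostnqn} {subnqn}"

# host2 was never added to cnode1, so no PSK is registered for this identity:
print(psk_identity("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"))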
00:21:19.179 request: 00:21:19.179 { 00:21:19.179 "name": "TLSTEST", 00:21:19.179 "trtype": "tcp", 00:21:19.179 "traddr": "10.0.0.2", 00:21:19.179 "adrfam": "ipv4", 00:21:19.179 "trsvcid": "4420", 00:21:19.179 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:19.179 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:19.179 "prchk_reftag": false, 00:21:19.179 "prchk_guard": false, 00:21:19.179 "hdgst": false, 00:21:19.179 "ddgst": false, 00:21:19.179 "psk": "/tmp/tmp.bGEqRH2nDw", 00:21:19.179 "method": "bdev_nvme_attach_controller", 00:21:19.179 "req_id": 1 00:21:19.180 } 00:21:19.180 Got JSON-RPC error response 00:21:19.180 response: 00:21:19.180 { 00:21:19.180 "code": -5, 00:21:19.180 "message": "Input/output error" 00:21:19.180 } 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1374904 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1374904 ']' 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1374904 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1374904 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1374904' 00:21:19.440 killing process with pid 1374904 00:21:19.440 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1374904 00:21:19.441 Received shutdown signal, test time was about 10.000000 seconds 00:21:19.441 00:21:19.441 Latency(us) 00:21:19.441 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:19.441 =================================================================================================================== 00:21:19.441 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:19.441 [2024-07-15 20:35:11.634936] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1374904 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bGEqRH2nDw 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bGEqRH2nDw 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.bGEqRH2nDw 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.bGEqRH2nDw' 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1375092 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1375092 /var/tmp/bdevperf.sock 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1375092 ']' 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:19.441 20:35:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.441 [2024-07-15 20:35:11.791770] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:19.441 [2024-07-15 20:35:11.791823] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375092 ] 00:21:19.702 EAL: No free 2048 kB hugepages reported on node 1 00:21:19.702 [2024-07-15 20:35:11.848194] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.702 [2024-07-15 20:35:11.899082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:20.272 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:20.272 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:20.272 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.bGEqRH2nDw 00:21:20.533 [2024-07-15 20:35:12.700461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:20.533 [2024-07-15 20:35:12.700522] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:20.533 [2024-07-15 20:35:12.706108] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.533 [2024-07-15 20:35:12.706124] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:20.533 [2024-07-15 20:35:12.706142] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:20.533 [2024-07-15 20:35:12.706700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4d80 (107): Transport endpoint is not connected 00:21:20.533 [2024-07-15 20:35:12.707695] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22d4d80 (9): Bad file descriptor 00:21:20.533 [2024-07-15 20:35:12.708697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:20.533 [2024-07-15 20:35:12.708703] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:20.533 [2024-07-15 20:35:12.708710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:21:20.533 request: 00:21:20.533 { 00:21:20.533 "name": "TLSTEST", 00:21:20.533 "trtype": "tcp", 00:21:20.533 "traddr": "10.0.0.2", 00:21:20.533 "adrfam": "ipv4", 00:21:20.533 "trsvcid": "4420", 00:21:20.533 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:20.533 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:20.533 "prchk_reftag": false, 00:21:20.533 "prchk_guard": false, 00:21:20.533 "hdgst": false, 00:21:20.533 "ddgst": false, 00:21:20.533 "psk": "/tmp/tmp.bGEqRH2nDw", 00:21:20.533 "method": "bdev_nvme_attach_controller", 00:21:20.533 "req_id": 1 00:21:20.533 } 00:21:20.533 Got JSON-RPC error response 00:21:20.533 response: 00:21:20.533 { 00:21:20.533 "code": -5, 00:21:20.533 "message": "Input/output error" 00:21:20.533 } 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1375092 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1375092 ']' 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1375092 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375092 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375092' 00:21:20.533 killing process with pid 1375092 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1375092 00:21:20.533 Received shutdown signal, test time was about 10.000000 seconds 00:21:20.533 00:21:20.533 Latency(us) 00:21:20.533 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:20.533 =================================================================================================================== 00:21:20.533 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:20.533 [2024-07-15 20:35:12.795310] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1375092 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type 
-t run_bdevperf 00:21:20.533 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1375262 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1375262 /var/tmp/bdevperf.sock 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1375262 ']' 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:20.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:20.534 20:35:12 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:20.794 [2024-07-15 20:35:12.961279] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:20.794 [2024-07-15 20:35:12.961333] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375262 ] 00:21:20.794 EAL: No free 2048 kB hugepages reported on node 1 00:21:20.794 [2024-07-15 20:35:13.020946] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.794 [2024-07-15 20:35:13.072223] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.363 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:21.363 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:21.363 20:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:21.623 [2024-07-15 20:35:13.878995] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:21.623 [2024-07-15 20:35:13.880912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2024460 (9): Bad file descriptor 00:21:21.623 [2024-07-15 20:35:13.881911] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:21.623 [2024-07-15 20:35:13.881918] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:21.623 [2024-07-15 20:35:13.881925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
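Annotation: all four negative cases above run under the harness's NOT wrapper — the wrapped run_bdevperf is expected to fail, its `return 1` is converted to `es=1`, and the test passes only because the failure occurred. A Python equivalent of that expected-failure pattern, sketched for illustration (the helper name and use of subprocess are illustrative, not the harness's code):

import subprocess

def expect_failure(cmd):
    # Mirror of the bash NOT wrapper: success of the wrapped command is the error.
    rc = subprocess.run(cmd).returncode
    assert rc != 0, f"{cmd!r} unexpectedly succeeded"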
00:21:21.623 request: 00:21:21.623 { 00:21:21.623 "name": "TLSTEST", 00:21:21.623 "trtype": "tcp", 00:21:21.623 "traddr": "10.0.0.2", 00:21:21.623 "adrfam": "ipv4", 00:21:21.623 "trsvcid": "4420", 00:21:21.623 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.623 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.623 "prchk_reftag": false, 00:21:21.623 "prchk_guard": false, 00:21:21.623 "hdgst": false, 00:21:21.623 "ddgst": false, 00:21:21.623 "method": "bdev_nvme_attach_controller", 00:21:21.623 "req_id": 1 00:21:21.623 } 00:21:21.623 Got JSON-RPC error response 00:21:21.623 response: 00:21:21.623 { 00:21:21.623 "code": -5, 00:21:21.623 "message": "Input/output error" 00:21:21.623 } 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1375262 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1375262 ']' 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1375262 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375262 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375262' 00:21:21.623 killing process with pid 1375262 00:21:21.623 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1375262 00:21:21.623 Received shutdown signal, test time was about 10.000000 seconds 00:21:21.623 00:21:21.623 Latency(us) 00:21:21.623 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:21.623 =================================================================================================================== 00:21:21.624 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:21.624 20:35:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1375262 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 1369566 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1369566 ']' 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1369566 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1369566 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1369566' 00:21:21.884 
killing process with pid 1369566 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1369566 00:21:21.884 [2024-07-15 20:35:14.128921] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1369566 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:21:21.884 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.Ngu1U6RrRP 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.Ngu1U6RrRP 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1375608 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1375608 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1375608 ']' 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:22.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:22.145 20:35:14 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:22.145 [2024-07-15 20:35:14.358880] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
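Annotation: the key material for the next phase is built by format_interchange_psk above — the inline `python -` helper wraps the configured secret into the NVMeTLSkey-1 interchange form, which is then written to a mode-0600 temp file. A standalone sketch of that computation; the base64 payload of the logged key decodes back to the hex string itself, and the 4-byte suffix is consistent with an appended little-endian CRC32, so details beyond what the log shows are assumptions:

import base64
import zlib

def format_interchange_psk(secret: str, hmac_id: int) -> str:
    # The secret is used as ASCII bytes, with a little-endian CRC32 appended
    # before base64 encoding; hmac_id 2 renders as the "02" field of the key.
    data = secret.encode("ascii")
    crc = zlib.crc32(data).to_bytes(4, "little")
    return f"NVMeTLSkey-1:{hmac_id:02x}:{base64.b64encode(data + crc).decode()}:"

key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
# Matches the logged key_long value, ending in ...MzQ0NTU2Njc3wWXNJw==: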
00:21:22.145 [2024-07-15 20:35:14.358930] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:22.145 EAL: No free 2048 kB hugepages reported on node 1 00:21:22.145 [2024-07-15 20:35:14.447740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:22.145 [2024-07-15 20:35:14.500115] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:22.145 [2024-07-15 20:35:14.500152] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:22.145 [2024-07-15 20:35:14.500157] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:22.145 [2024-07-15 20:35:14.500161] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:22.145 [2024-07-15 20:35:14.500165] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:22.145 [2024-07-15 20:35:14.500181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ngu1U6RrRP 00:21:23.085 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:23.085 [2024-07-15 20:35:15.294587] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.086 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:23.086 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:23.345 [2024-07-15 20:35:15.587292] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:23.345 [2024-07-15 20:35:15.587450] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.345 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:23.606 malloc0 00:21:23.606 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:23.606 20:35:15 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Ngu1U6RrRP 00:21:23.865 [2024-07-15 20:35:16.018130] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ngu1U6RrRP 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ngu1U6RrRP' 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1375972 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1375972 /var/tmp/bdevperf.sock 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1375972 ']' 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:23.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:23.865 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:23.865 [2024-07-15 20:35:16.065520] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:23.865 [2024-07-15 20:35:16.065568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1375972 ] 00:21:23.865 EAL: No free 2048 kB hugepages reported on node 1 00:21:23.865 [2024-07-15 20:35:16.121689] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:23.865 [2024-07-15 20:35:16.173576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.124 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:24.124 20:35:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:24.124 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ngu1U6RrRP 00:21:24.124 [2024-07-15 20:35:16.393602] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:24.124 [2024-07-15 20:35:16.393668] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:24.124 TLSTESTn1 00:21:24.124 20:35:16 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:24.384 Running I/O for 10 seconds... 00:21:34.382 00:21:34.382 Latency(us) 00:21:34.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:34.382 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:34.382 Verification LBA range: start 0x0 length 0x2000 00:21:34.382 TLSTESTn1 : 10.05 5580.42 21.80 0.00 0.00 22879.75 5434.03 86944.43 00:21:34.382 =================================================================================================================== 00:21:34.382 Total : 5580.42 21.80 0.00 0.00 22879.75 5434.03 86944.43 00:21:34.382 0 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 1375972 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1375972 ']' 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1375972 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375972 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375972' 00:21:34.382 killing process with pid 1375972 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1375972 00:21:34.382 Received shutdown signal, test time was about 10.000000 seconds 00:21:34.382 00:21:34.382 Latency(us) 00:21:34.382 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s 
Average min max 00:21:34.382 =================================================================================================================== 00:21:34.382 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:34.382 [2024-07-15 20:35:26.724649] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:34.382 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1375972 00:21:34.642 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.Ngu1U6RrRP 00:21:34.642 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ngu1U6RrRP 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ngu1U6RrRP 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ngu1U6RrRP 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.Ngu1U6RrRP' 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1378003 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1378003 /var/tmp/bdevperf.sock 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1378003 ']' 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:34.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.643 20:35:26 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.643 [2024-07-15 20:35:26.893823] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:34.643 [2024-07-15 20:35:26.893891] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1378003 ] 00:21:34.643 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.643 [2024-07-15 20:35:26.963430] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.643 [2024-07-15 20:35:27.015345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ngu1U6RrRP 00:21:34.903 [2024-07-15 20:35:27.219372] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:34.903 [2024-07-15 20:35:27.219416] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:34.903 [2024-07-15 20:35:27.219421] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.Ngu1U6RrRP 00:21:34.903 request: 00:21:34.903 { 00:21:34.903 "name": "TLSTEST", 00:21:34.903 "trtype": "tcp", 00:21:34.903 "traddr": "10.0.0.2", 00:21:34.903 "adrfam": "ipv4", 00:21:34.903 "trsvcid": "4420", 00:21:34.903 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.903 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.903 "prchk_reftag": false, 00:21:34.903 "prchk_guard": false, 00:21:34.903 "hdgst": false, 00:21:34.903 "ddgst": false, 00:21:34.903 "psk": "/tmp/tmp.Ngu1U6RrRP", 00:21:34.903 "method": "bdev_nvme_attach_controller", 00:21:34.903 "req_id": 1 00:21:34.903 } 00:21:34.903 Got JSON-RPC error response 00:21:34.903 response: 00:21:34.903 { 00:21:34.903 "code": -1, 00:21:34.903 "message": "Operation not permitted" 00:21:34.903 } 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 1378003 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1378003 ']' 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1378003 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:34.903 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378003 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378003' 00:21:35.164 killing process with pid 1378003 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1378003 00:21:35.164 Received shutdown signal, test time was about 10.000000 seconds 00:21:35.164 00:21:35.164 Latency(us) 00:21:35.164 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:35.164 
=================================================================================================================== 00:21:35.164 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1378003 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 1375608 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1375608 ']' 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1375608 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1375608 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1375608' 00:21:35.164 killing process with pid 1375608 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1375608 00:21:35.164 [2024-07-15 20:35:27.466471] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:35.164 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1375608 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1378302 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1378302 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1378302 ']' 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:35.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
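Annotation: the @170/@171 sequence above is the permission-gate test — with the key file relaxed to 0666, the initiator side (bdev_nvme_load_psk) refuses it with "Incorrect permissions for PSK file", and the @177 case that follows shows the target side (tcp_load_psk) rejecting the same file during nvmf_subsystem_add_host. A sketch of an owner-only check consistent with that behavior; the precise mode bits SPDK rejects are an assumption, since the log only shows that 0666 fails and 0600 passes:

import os
import stat

def psk_perms_ok(path: str) -> bool:
    # Reject PSK files with any group/other access bits set (e.g. 0666).
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

# After `chmod 0666 /tmp/tmp.Ngu1U6RrRP` this returns False; after the
# `chmod 0600` at target/tls.sh@181 it returns True again.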
00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.424 20:35:27 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.424 [2024-07-15 20:35:27.619380] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:35.424 [2024-07-15 20:35:27.619423] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:35.424 EAL: No free 2048 kB hugepages reported on node 1 00:21:35.425 [2024-07-15 20:35:27.697294] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.425 [2024-07-15 20:35:27.750248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:35.425 [2024-07-15 20:35:27.750280] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:35.425 [2024-07-15 20:35:27.750285] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:35.425 [2024-07-15 20:35:27.750290] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:35.425 [2024-07-15 20:35:27.750294] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:35.425 [2024-07-15 20:35:27.750313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ngu1U6RrRP 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:36.365 [2024-07-15 20:35:28.584669] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.365 20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:36.626 
20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:36.626 [2024-07-15 20:35:28.897437] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.626 [2024-07-15 20:35:28.897607] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.626 20:35:28 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:36.917 malloc0 00:21:36.917 20:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:36.917 20:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ngu1U6RrRP 00:21:37.246 [2024-07-15 20:35:29.316506] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:21:37.246 [2024-07-15 20:35:29.316526] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:21:37.246 [2024-07-15 20:35:29.316545] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:37.246 request: 00:21:37.246 { 00:21:37.246 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.246 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.246 "psk": "/tmp/tmp.Ngu1U6RrRP", 00:21:37.246 "method": "nvmf_subsystem_add_host", 00:21:37.246 "req_id": 1 00:21:37.246 } 00:21:37.246 Got JSON-RPC error response 00:21:37.246 response: 00:21:37.246 { 00:21:37.246 "code": -32603, 00:21:37.246 "message": "Internal error" 00:21:37.246 } 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1378302 ']' 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378302' 00:21:37.246 killing process with pid 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1378302 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.Ngu1U6RrRP 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:21:37.246 
20:35:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1378706 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1378706 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1378706 ']' 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:37.246 20:35:29 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.246 [2024-07-15 20:35:29.569517] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:37.246 [2024-07-15 20:35:29.569572] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.246 EAL: No free 2048 kB hugepages reported on node 1 00:21:37.506 [2024-07-15 20:35:29.656144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.506 [2024-07-15 20:35:29.708619] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.506 [2024-07-15 20:35:29.708652] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.506 [2024-07-15 20:35:29.708661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.506 [2024-07-15 20:35:29.708665] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.506 [2024-07-15 20:35:29.708669] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
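For context on the restart above: the first nvmf_subsystem_add_host attempt failed at tcp.c:3589 because /tmp/tmp.Ngu1U6RrRP was not owner-only, so target/tls.sh@181 ran chmod 0600 on it and the whole setup_nvmf_tgt sequence is now repeated against the fresh target. Condensed from the trace (rpc.py paths shortened), the sequence is:

    # setup_nvmf_tgt as driven here (target/tls.sh@49-@58); the PSK file must be
    # mode 0600 or nvmf_subsystem_add_host fails with "Incorrect permissions for PSK file".
    key=/tmp/tmp.Ngu1U6RrRP
    chmod 0600 "$key"
    scripts/rpc.py nvmf_create_transport -t tcp -o
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$key"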
00:21:37.506 [2024-07-15 20:35:29.708687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ngu1U6RrRP 00:21:38.076 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:38.359 [2024-07-15 20:35:30.510898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:38.359 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:38.359 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:38.620 [2024-07-15 20:35:30.811630] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:38.620 [2024-07-15 20:35:30.811784] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.620 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:38.620 malloc0 00:21:38.620 20:35:30 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:38.880 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ngu1U6RrRP 00:21:38.880 [2024-07-15 20:35:31.254599] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=1379071 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 1379071 /var/tmp/bdevperf.sock 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1379071 ']' 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:39.140 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:39.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:39.141 [2024-07-15 20:35:31.298938] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:39.141 [2024-07-15 20:35:31.298991] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379071 ] 00:21:39.141 EAL: No free 2048 kB hugepages reported on node 1 00:21:39.141 [2024-07-15 20:35:31.354943] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:39.141 [2024-07-15 20:35:31.406896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:39.141 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.Ngu1U6RrRP 00:21:39.400 [2024-07-15 20:35:31.626699] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:39.400 [2024-07-15 20:35:31.626763] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:39.400 TLSTESTn1 00:21:39.400 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:39.661 20:35:31 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:21:39.661 "subsystems": [ 00:21:39.661 { 00:21:39.661 "subsystem": "keyring", 00:21:39.661 "config": [] 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "subsystem": "iobuf", 00:21:39.661 "config": [ 00:21:39.661 { 00:21:39.661 "method": "iobuf_set_options", 00:21:39.661 "params": { 00:21:39.661 "small_pool_count": 8192, 00:21:39.661 "large_pool_count": 1024, 00:21:39.661 "small_bufsize": 8192, 00:21:39.661 "large_bufsize": 135168 00:21:39.661 } 00:21:39.661 } 00:21:39.661 ] 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "subsystem": "sock", 00:21:39.661 "config": [ 00:21:39.661 { 00:21:39.661 "method": "sock_set_default_impl", 00:21:39.661 "params": { 00:21:39.661 "impl_name": "posix" 00:21:39.661 } 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "method": "sock_impl_set_options", 00:21:39.661 "params": { 00:21:39.661 "impl_name": "ssl", 00:21:39.661 "recv_buf_size": 4096, 00:21:39.661 "send_buf_size": 4096, 00:21:39.661 "enable_recv_pipe": true, 00:21:39.661 "enable_quickack": false, 00:21:39.661 "enable_placement_id": 0, 00:21:39.661 "enable_zerocopy_send_server": true, 00:21:39.661 "enable_zerocopy_send_client": false, 00:21:39.661 "zerocopy_threshold": 0, 00:21:39.661 "tls_version": 0, 00:21:39.661 "enable_ktls": false 00:21:39.661 } 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "method": "sock_impl_set_options", 00:21:39.661 "params": { 00:21:39.661 "impl_name": "posix", 00:21:39.661 "recv_buf_size": 2097152, 00:21:39.661 
"send_buf_size": 2097152, 00:21:39.661 "enable_recv_pipe": true, 00:21:39.661 "enable_quickack": false, 00:21:39.661 "enable_placement_id": 0, 00:21:39.661 "enable_zerocopy_send_server": true, 00:21:39.661 "enable_zerocopy_send_client": false, 00:21:39.661 "zerocopy_threshold": 0, 00:21:39.661 "tls_version": 0, 00:21:39.661 "enable_ktls": false 00:21:39.661 } 00:21:39.661 } 00:21:39.661 ] 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "subsystem": "vmd", 00:21:39.661 "config": [] 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "subsystem": "accel", 00:21:39.661 "config": [ 00:21:39.661 { 00:21:39.661 "method": "accel_set_options", 00:21:39.661 "params": { 00:21:39.661 "small_cache_size": 128, 00:21:39.661 "large_cache_size": 16, 00:21:39.661 "task_count": 2048, 00:21:39.661 "sequence_count": 2048, 00:21:39.661 "buf_count": 2048 00:21:39.661 } 00:21:39.661 } 00:21:39.661 ] 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "subsystem": "bdev", 00:21:39.661 "config": [ 00:21:39.661 { 00:21:39.661 "method": "bdev_set_options", 00:21:39.661 "params": { 00:21:39.661 "bdev_io_pool_size": 65535, 00:21:39.661 "bdev_io_cache_size": 256, 00:21:39.661 "bdev_auto_examine": true, 00:21:39.661 "iobuf_small_cache_size": 128, 00:21:39.661 "iobuf_large_cache_size": 16 00:21:39.661 } 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "method": "bdev_raid_set_options", 00:21:39.661 "params": { 00:21:39.661 "process_window_size_kb": 1024 00:21:39.661 } 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "method": "bdev_iscsi_set_options", 00:21:39.661 "params": { 00:21:39.661 "timeout_sec": 30 00:21:39.661 } 00:21:39.661 }, 00:21:39.661 { 00:21:39.661 "method": "bdev_nvme_set_options", 00:21:39.661 "params": { 00:21:39.661 "action_on_timeout": "none", 00:21:39.661 "timeout_us": 0, 00:21:39.661 "timeout_admin_us": 0, 00:21:39.661 "keep_alive_timeout_ms": 10000, 00:21:39.661 "arbitration_burst": 0, 00:21:39.661 "low_priority_weight": 0, 00:21:39.661 "medium_priority_weight": 0, 00:21:39.661 "high_priority_weight": 0, 00:21:39.661 "nvme_adminq_poll_period_us": 10000, 00:21:39.661 "nvme_ioq_poll_period_us": 0, 00:21:39.661 "io_queue_requests": 0, 00:21:39.661 "delay_cmd_submit": true, 00:21:39.661 "transport_retry_count": 4, 00:21:39.661 "bdev_retry_count": 3, 00:21:39.661 "transport_ack_timeout": 0, 00:21:39.661 "ctrlr_loss_timeout_sec": 0, 00:21:39.661 "reconnect_delay_sec": 0, 00:21:39.661 "fast_io_fail_timeout_sec": 0, 00:21:39.661 "disable_auto_failback": false, 00:21:39.661 "generate_uuids": false, 00:21:39.661 "transport_tos": 0, 00:21:39.661 "nvme_error_stat": false, 00:21:39.662 "rdma_srq_size": 0, 00:21:39.662 "io_path_stat": false, 00:21:39.662 "allow_accel_sequence": false, 00:21:39.662 "rdma_max_cq_size": 0, 00:21:39.662 "rdma_cm_event_timeout_ms": 0, 00:21:39.662 "dhchap_digests": [ 00:21:39.662 "sha256", 00:21:39.662 "sha384", 00:21:39.662 "sha512" 00:21:39.662 ], 00:21:39.662 "dhchap_dhgroups": [ 00:21:39.662 "null", 00:21:39.662 "ffdhe2048", 00:21:39.662 "ffdhe3072", 00:21:39.662 "ffdhe4096", 00:21:39.662 "ffdhe6144", 00:21:39.662 "ffdhe8192" 00:21:39.662 ] 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "bdev_nvme_set_hotplug", 00:21:39.662 "params": { 00:21:39.662 "period_us": 100000, 00:21:39.662 "enable": false 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "bdev_malloc_create", 00:21:39.662 "params": { 00:21:39.662 "name": "malloc0", 00:21:39.662 "num_blocks": 8192, 00:21:39.662 "block_size": 4096, 00:21:39.662 "physical_block_size": 4096, 00:21:39.662 "uuid": 
"7057423e-0a8b-4953-8979-7875c6d95052", 00:21:39.662 "optimal_io_boundary": 0 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "bdev_wait_for_examine" 00:21:39.662 } 00:21:39.662 ] 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "subsystem": "nbd", 00:21:39.662 "config": [] 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "subsystem": "scheduler", 00:21:39.662 "config": [ 00:21:39.662 { 00:21:39.662 "method": "framework_set_scheduler", 00:21:39.662 "params": { 00:21:39.662 "name": "static" 00:21:39.662 } 00:21:39.662 } 00:21:39.662 ] 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "subsystem": "nvmf", 00:21:39.662 "config": [ 00:21:39.662 { 00:21:39.662 "method": "nvmf_set_config", 00:21:39.662 "params": { 00:21:39.662 "discovery_filter": "match_any", 00:21:39.662 "admin_cmd_passthru": { 00:21:39.662 "identify_ctrlr": false 00:21:39.662 } 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_set_max_subsystems", 00:21:39.662 "params": { 00:21:39.662 "max_subsystems": 1024 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_set_crdt", 00:21:39.662 "params": { 00:21:39.662 "crdt1": 0, 00:21:39.662 "crdt2": 0, 00:21:39.662 "crdt3": 0 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_create_transport", 00:21:39.662 "params": { 00:21:39.662 "trtype": "TCP", 00:21:39.662 "max_queue_depth": 128, 00:21:39.662 "max_io_qpairs_per_ctrlr": 127, 00:21:39.662 "in_capsule_data_size": 4096, 00:21:39.662 "max_io_size": 131072, 00:21:39.662 "io_unit_size": 131072, 00:21:39.662 "max_aq_depth": 128, 00:21:39.662 "num_shared_buffers": 511, 00:21:39.662 "buf_cache_size": 4294967295, 00:21:39.662 "dif_insert_or_strip": false, 00:21:39.662 "zcopy": false, 00:21:39.662 "c2h_success": false, 00:21:39.662 "sock_priority": 0, 00:21:39.662 "abort_timeout_sec": 1, 00:21:39.662 "ack_timeout": 0, 00:21:39.662 "data_wr_pool_size": 0 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_create_subsystem", 00:21:39.662 "params": { 00:21:39.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.662 "allow_any_host": false, 00:21:39.662 "serial_number": "SPDK00000000000001", 00:21:39.662 "model_number": "SPDK bdev Controller", 00:21:39.662 "max_namespaces": 10, 00:21:39.662 "min_cntlid": 1, 00:21:39.662 "max_cntlid": 65519, 00:21:39.662 "ana_reporting": false 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_subsystem_add_host", 00:21:39.662 "params": { 00:21:39.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.662 "host": "nqn.2016-06.io.spdk:host1", 00:21:39.662 "psk": "/tmp/tmp.Ngu1U6RrRP" 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_subsystem_add_ns", 00:21:39.662 "params": { 00:21:39.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.662 "namespace": { 00:21:39.662 "nsid": 1, 00:21:39.662 "bdev_name": "malloc0", 00:21:39.662 "nguid": "7057423E0A8B495389797875C6D95052", 00:21:39.662 "uuid": "7057423e-0a8b-4953-8979-7875c6d95052", 00:21:39.662 "no_auto_visible": false 00:21:39.662 } 00:21:39.662 } 00:21:39.662 }, 00:21:39.662 { 00:21:39.662 "method": "nvmf_subsystem_add_listener", 00:21:39.662 "params": { 00:21:39.662 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.662 "listen_address": { 00:21:39.662 "trtype": "TCP", 00:21:39.662 "adrfam": "IPv4", 00:21:39.662 "traddr": "10.0.0.2", 00:21:39.662 "trsvcid": "4420" 00:21:39.662 }, 00:21:39.662 "secure_channel": true 00:21:39.662 } 00:21:39.662 } 00:21:39.662 ] 00:21:39.662 } 00:21:39.662 ] 00:21:39.662 }' 00:21:39.662 20:35:31 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:21:39.923 "subsystems": [ 00:21:39.923 { 00:21:39.923 "subsystem": "keyring", 00:21:39.923 "config": [] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "iobuf", 00:21:39.923 "config": [ 00:21:39.923 { 00:21:39.923 "method": "iobuf_set_options", 00:21:39.923 "params": { 00:21:39.923 "small_pool_count": 8192, 00:21:39.923 "large_pool_count": 1024, 00:21:39.923 "small_bufsize": 8192, 00:21:39.923 "large_bufsize": 135168 00:21:39.923 } 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "sock", 00:21:39.923 "config": [ 00:21:39.923 { 00:21:39.923 "method": "sock_set_default_impl", 00:21:39.923 "params": { 00:21:39.923 "impl_name": "posix" 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "sock_impl_set_options", 00:21:39.923 "params": { 00:21:39.923 "impl_name": "ssl", 00:21:39.923 "recv_buf_size": 4096, 00:21:39.923 "send_buf_size": 4096, 00:21:39.923 "enable_recv_pipe": true, 00:21:39.923 "enable_quickack": false, 00:21:39.923 "enable_placement_id": 0, 00:21:39.923 "enable_zerocopy_send_server": true, 00:21:39.923 "enable_zerocopy_send_client": false, 00:21:39.923 "zerocopy_threshold": 0, 00:21:39.923 "tls_version": 0, 00:21:39.923 "enable_ktls": false 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "sock_impl_set_options", 00:21:39.923 "params": { 00:21:39.923 "impl_name": "posix", 00:21:39.923 "recv_buf_size": 2097152, 00:21:39.923 "send_buf_size": 2097152, 00:21:39.923 "enable_recv_pipe": true, 00:21:39.923 "enable_quickack": false, 00:21:39.923 "enable_placement_id": 0, 00:21:39.923 "enable_zerocopy_send_server": true, 00:21:39.923 "enable_zerocopy_send_client": false, 00:21:39.923 "zerocopy_threshold": 0, 00:21:39.923 "tls_version": 0, 00:21:39.923 "enable_ktls": false 00:21:39.923 } 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "vmd", 00:21:39.923 "config": [] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "accel", 00:21:39.923 "config": [ 00:21:39.923 { 00:21:39.923 "method": "accel_set_options", 00:21:39.923 "params": { 00:21:39.923 "small_cache_size": 128, 00:21:39.923 "large_cache_size": 16, 00:21:39.923 "task_count": 2048, 00:21:39.923 "sequence_count": 2048, 00:21:39.923 "buf_count": 2048 00:21:39.923 } 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "bdev", 00:21:39.923 "config": [ 00:21:39.923 { 00:21:39.923 "method": "bdev_set_options", 00:21:39.923 "params": { 00:21:39.923 "bdev_io_pool_size": 65535, 00:21:39.923 "bdev_io_cache_size": 256, 00:21:39.923 "bdev_auto_examine": true, 00:21:39.923 "iobuf_small_cache_size": 128, 00:21:39.923 "iobuf_large_cache_size": 16 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_raid_set_options", 00:21:39.923 "params": { 00:21:39.923 "process_window_size_kb": 1024 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_iscsi_set_options", 00:21:39.923 "params": { 00:21:39.923 "timeout_sec": 30 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_nvme_set_options", 00:21:39.923 "params": { 00:21:39.923 "action_on_timeout": "none", 00:21:39.923 "timeout_us": 0, 00:21:39.923 "timeout_admin_us": 0, 00:21:39.923 "keep_alive_timeout_ms": 10000, 00:21:39.923 "arbitration_burst": 0, 
00:21:39.923 "low_priority_weight": 0, 00:21:39.923 "medium_priority_weight": 0, 00:21:39.923 "high_priority_weight": 0, 00:21:39.923 "nvme_adminq_poll_period_us": 10000, 00:21:39.923 "nvme_ioq_poll_period_us": 0, 00:21:39.923 "io_queue_requests": 512, 00:21:39.923 "delay_cmd_submit": true, 00:21:39.923 "transport_retry_count": 4, 00:21:39.923 "bdev_retry_count": 3, 00:21:39.923 "transport_ack_timeout": 0, 00:21:39.923 "ctrlr_loss_timeout_sec": 0, 00:21:39.923 "reconnect_delay_sec": 0, 00:21:39.923 "fast_io_fail_timeout_sec": 0, 00:21:39.923 "disable_auto_failback": false, 00:21:39.923 "generate_uuids": false, 00:21:39.923 "transport_tos": 0, 00:21:39.923 "nvme_error_stat": false, 00:21:39.923 "rdma_srq_size": 0, 00:21:39.923 "io_path_stat": false, 00:21:39.923 "allow_accel_sequence": false, 00:21:39.923 "rdma_max_cq_size": 0, 00:21:39.923 "rdma_cm_event_timeout_ms": 0, 00:21:39.923 "dhchap_digests": [ 00:21:39.923 "sha256", 00:21:39.923 "sha384", 00:21:39.923 "sha512" 00:21:39.923 ], 00:21:39.923 "dhchap_dhgroups": [ 00:21:39.923 "null", 00:21:39.923 "ffdhe2048", 00:21:39.923 "ffdhe3072", 00:21:39.923 "ffdhe4096", 00:21:39.923 "ffdhe6144", 00:21:39.923 "ffdhe8192" 00:21:39.923 ] 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_nvme_attach_controller", 00:21:39.923 "params": { 00:21:39.923 "name": "TLSTEST", 00:21:39.923 "trtype": "TCP", 00:21:39.923 "adrfam": "IPv4", 00:21:39.923 "traddr": "10.0.0.2", 00:21:39.923 "trsvcid": "4420", 00:21:39.923 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:39.923 "prchk_reftag": false, 00:21:39.923 "prchk_guard": false, 00:21:39.923 "ctrlr_loss_timeout_sec": 0, 00:21:39.923 "reconnect_delay_sec": 0, 00:21:39.923 "fast_io_fail_timeout_sec": 0, 00:21:39.923 "psk": "/tmp/tmp.Ngu1U6RrRP", 00:21:39.923 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:39.923 "hdgst": false, 00:21:39.923 "ddgst": false 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_nvme_set_hotplug", 00:21:39.923 "params": { 00:21:39.923 "period_us": 100000, 00:21:39.923 "enable": false 00:21:39.923 } 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "method": "bdev_wait_for_examine" 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }, 00:21:39.923 { 00:21:39.923 "subsystem": "nbd", 00:21:39.923 "config": [] 00:21:39.923 } 00:21:39.923 ] 00:21:39.923 }' 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 1379071 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1379071 ']' 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1379071 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:39.923 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1379071 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1379071' 00:21:39.924 killing process with pid 1379071 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1379071 00:21:39.924 Received shutdown signal, test time was about 10.000000 seconds 00:21:39.924 00:21:39.924 Latency(us) 00:21:39.924 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min 
max 00:21:39.924 =================================================================================================================== 00:21:39.924 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:39.924 [2024-07-15 20:35:32.256150] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:39.924 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1379071 00:21:40.184 20:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 1378706 00:21:40.184 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1378706 ']' 00:21:40.184 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1378706 00:21:40.184 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1378706 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1378706' 00:21:40.185 killing process with pid 1378706 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1378706 00:21:40.185 [2024-07-15 20:35:32.424280] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1378706 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.185 20:35:32 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:21:40.185 "subsystems": [ 00:21:40.185 { 00:21:40.185 "subsystem": "keyring", 00:21:40.185 "config": [] 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "subsystem": "iobuf", 00:21:40.185 "config": [ 00:21:40.185 { 00:21:40.185 "method": "iobuf_set_options", 00:21:40.185 "params": { 00:21:40.185 "small_pool_count": 8192, 00:21:40.185 "large_pool_count": 1024, 00:21:40.185 "small_bufsize": 8192, 00:21:40.185 "large_bufsize": 135168 00:21:40.185 } 00:21:40.185 } 00:21:40.185 ] 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "subsystem": "sock", 00:21:40.185 "config": [ 00:21:40.185 { 00:21:40.185 "method": "sock_set_default_impl", 00:21:40.185 "params": { 00:21:40.185 "impl_name": "posix" 00:21:40.185 } 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "method": "sock_impl_set_options", 00:21:40.185 "params": { 00:21:40.185 "impl_name": "ssl", 00:21:40.185 "recv_buf_size": 4096, 00:21:40.185 "send_buf_size": 4096, 00:21:40.185 "enable_recv_pipe": true, 00:21:40.185 "enable_quickack": false, 00:21:40.185 "enable_placement_id": 0, 00:21:40.185 "enable_zerocopy_send_server": true, 00:21:40.185 "enable_zerocopy_send_client": false, 00:21:40.185 "zerocopy_threshold": 0, 00:21:40.185 "tls_version": 0, 00:21:40.185 "enable_ktls": false 00:21:40.185 } 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "method": "sock_impl_set_options", 
00:21:40.185 "params": { 00:21:40.185 "impl_name": "posix", 00:21:40.185 "recv_buf_size": 2097152, 00:21:40.185 "send_buf_size": 2097152, 00:21:40.185 "enable_recv_pipe": true, 00:21:40.185 "enable_quickack": false, 00:21:40.185 "enable_placement_id": 0, 00:21:40.185 "enable_zerocopy_send_server": true, 00:21:40.185 "enable_zerocopy_send_client": false, 00:21:40.185 "zerocopy_threshold": 0, 00:21:40.185 "tls_version": 0, 00:21:40.185 "enable_ktls": false 00:21:40.185 } 00:21:40.185 } 00:21:40.185 ] 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "subsystem": "vmd", 00:21:40.185 "config": [] 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "subsystem": "accel", 00:21:40.185 "config": [ 00:21:40.185 { 00:21:40.185 "method": "accel_set_options", 00:21:40.185 "params": { 00:21:40.185 "small_cache_size": 128, 00:21:40.185 "large_cache_size": 16, 00:21:40.185 "task_count": 2048, 00:21:40.185 "sequence_count": 2048, 00:21:40.185 "buf_count": 2048 00:21:40.185 } 00:21:40.185 } 00:21:40.185 ] 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "subsystem": "bdev", 00:21:40.185 "config": [ 00:21:40.185 { 00:21:40.185 "method": "bdev_set_options", 00:21:40.185 "params": { 00:21:40.185 "bdev_io_pool_size": 65535, 00:21:40.185 "bdev_io_cache_size": 256, 00:21:40.185 "bdev_auto_examine": true, 00:21:40.185 "iobuf_small_cache_size": 128, 00:21:40.185 "iobuf_large_cache_size": 16 00:21:40.185 } 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "method": "bdev_raid_set_options", 00:21:40.185 "params": { 00:21:40.185 "process_window_size_kb": 1024 00:21:40.185 } 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "method": "bdev_iscsi_set_options", 00:21:40.185 "params": { 00:21:40.185 "timeout_sec": 30 00:21:40.185 } 00:21:40.185 }, 00:21:40.185 { 00:21:40.185 "method": "bdev_nvme_set_options", 00:21:40.185 "params": { 00:21:40.185 "action_on_timeout": "none", 00:21:40.185 "timeout_us": 0, 00:21:40.185 "timeout_admin_us": 0, 00:21:40.185 "keep_alive_timeout_ms": 10000, 00:21:40.185 "arbitration_burst": 0, 00:21:40.185 "low_priority_weight": 0, 00:21:40.185 "medium_priority_weight": 0, 00:21:40.185 "high_priority_weight": 0, 00:21:40.185 "nvme_adminq_poll_period_us": 10000, 00:21:40.185 "nvme_ioq_poll_period_us": 0, 00:21:40.185 "io_queue_requests": 0, 00:21:40.185 "delay_cmd_submit": true, 00:21:40.185 "transport_retry_count": 4, 00:21:40.185 "bdev_retry_count": 3, 00:21:40.185 "transport_ack_timeout": 0, 00:21:40.185 "ctrlr_loss_timeout_sec": 0, 00:21:40.185 "reconnect_delay_sec": 0, 00:21:40.185 "fast_io_fail_timeout_sec": 0, 00:21:40.185 "disable_auto_failback": false, 00:21:40.185 "generate_uuids": false, 00:21:40.185 "transport_tos": 0, 00:21:40.185 "nvme_error_stat": false, 00:21:40.185 "rdma_srq_size": 0, 00:21:40.185 "io_path_stat": false, 00:21:40.185 "allow_accel_sequence": false, 00:21:40.185 "rdma_max_cq_size": 0, 00:21:40.185 "rdma_cm_event_timeout_ms": 0, 00:21:40.185 "dhchap_digests": [ 00:21:40.186 "sha256", 00:21:40.186 "sha384", 00:21:40.186 "sha512" 00:21:40.186 ], 00:21:40.186 "dhchap_dhgroups": [ 00:21:40.186 "null", 00:21:40.186 "ffdhe2048", 00:21:40.186 "ffdhe3072", 00:21:40.186 "ffdhe4096", 00:21:40.186 "ffdhe6144", 00:21:40.186 "ffdhe8192" 00:21:40.186 ] 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "bdev_nvme_set_hotplug", 00:21:40.186 "params": { 00:21:40.186 "period_us": 100000, 00:21:40.186 "enable": false 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "bdev_malloc_create", 00:21:40.186 "params": { 00:21:40.186 "name": "malloc0", 00:21:40.186 "num_blocks": 8192, 
00:21:40.186 "block_size": 4096, 00:21:40.186 "physical_block_size": 4096, 00:21:40.186 "uuid": "7057423e-0a8b-4953-8979-7875c6d95052", 00:21:40.186 "optimal_io_boundary": 0 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "bdev_wait_for_examine" 00:21:40.186 } 00:21:40.186 ] 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "subsystem": "nbd", 00:21:40.186 "config": [] 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "subsystem": "scheduler", 00:21:40.186 "config": [ 00:21:40.186 { 00:21:40.186 "method": "framework_set_scheduler", 00:21:40.186 "params": { 00:21:40.186 "name": "static" 00:21:40.186 } 00:21:40.186 } 00:21:40.186 ] 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "subsystem": "nvmf", 00:21:40.186 "config": [ 00:21:40.186 { 00:21:40.186 "method": "nvmf_set_config", 00:21:40.186 "params": { 00:21:40.186 "discovery_filter": "match_any", 00:21:40.186 "admin_cmd_passthru": { 00:21:40.186 "identify_ctrlr": false 00:21:40.186 } 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_set_max_subsystems", 00:21:40.186 "params": { 00:21:40.186 "max_subsystems": 1024 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_set_crdt", 00:21:40.186 "params": { 00:21:40.186 "crdt1": 0, 00:21:40.186 "crdt2": 0, 00:21:40.186 "crdt3": 0 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_create_transport", 00:21:40.186 "params": { 00:21:40.186 "trtype": "TCP", 00:21:40.186 "max_queue_depth": 128, 00:21:40.186 "max_io_qpairs_per_ctrlr": 127, 00:21:40.186 "in_capsule_data_size": 4096, 00:21:40.186 "max_io_size": 131072, 00:21:40.186 "io_unit_size": 131072, 00:21:40.186 "max_aq_depth": 128, 00:21:40.186 "num_shared_buffers": 511, 00:21:40.186 "buf_cache_size": 4294967295, 00:21:40.186 "dif_insert_or_strip": false, 00:21:40.186 "zcopy": false, 00:21:40.186 "c2h_success": false, 00:21:40.186 "sock_priority": 0, 00:21:40.186 "abort_timeout_sec": 1, 00:21:40.186 "ack_timeout": 0, 00:21:40.186 "data_wr_pool_size": 0 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_create_subsystem", 00:21:40.186 "params": { 00:21:40.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.186 "allow_any_host": false, 00:21:40.186 "serial_number": "SPDK00000000000001", 00:21:40.186 "model_number": "SPDK bdev Controller", 00:21:40.186 "max_namespaces": 10, 00:21:40.186 "min_cntlid": 1, 00:21:40.186 "max_cntlid": 65519, 00:21:40.186 "ana_reporting": false 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_subsystem_add_host", 00:21:40.186 "params": { 00:21:40.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.186 "host": "nqn.2016-06.io.spdk:host1", 00:21:40.186 "psk": "/tmp/tmp.Ngu1U6RrRP" 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_subsystem_add_ns", 00:21:40.186 "params": { 00:21:40.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.186 "namespace": { 00:21:40.186 "nsid": 1, 00:21:40.186 "bdev_name": "malloc0", 00:21:40.186 "nguid": "7057423E0A8B495389797875C6D95052", 00:21:40.186 "uuid": "7057423e-0a8b-4953-8979-7875c6d95052", 00:21:40.186 "no_auto_visible": false 00:21:40.186 } 00:21:40.186 } 00:21:40.186 }, 00:21:40.186 { 00:21:40.186 "method": "nvmf_subsystem_add_listener", 00:21:40.186 "params": { 00:21:40.186 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:40.186 "listen_address": { 00:21:40.186 "trtype": "TCP", 00:21:40.186 "adrfam": "IPv4", 00:21:40.186 "traddr": "10.0.0.2", 00:21:40.186 "trsvcid": "4420" 00:21:40.186 }, 00:21:40.186 "secure_channel": true 00:21:40.186 } 
00:21:40.186 } 00:21:40.186 ] 00:21:40.186 } 00:21:40.186 ] 00:21:40.186 }' 00:21:40.186 20:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1379299 00:21:40.186 20:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1379299 00:21:40.186 20:35:32 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:40.186 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1379299 ']' 00:21:40.186 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.187 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:40.187 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.187 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.187 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:40.187 20:35:32 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:40.448 [2024-07-15 20:35:32.604035] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:40.448 [2024-07-15 20:35:32.604088] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.448 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.448 [2024-07-15 20:35:32.667148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.448 [2024-07-15 20:35:32.719650] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.448 [2024-07-15 20:35:32.719683] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:40.448 [2024-07-15 20:35:32.719688] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.448 [2024-07-15 20:35:32.719693] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.448 [2024-07-15 20:35:32.719697] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
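The target at pid 1379299 is not configured over RPC at all: the JSON captured by save_config earlier (tgtconf) is fed back at startup as -c /dev/fd/62. A sketch of how target/tls.sh@196 and @203 produce that file descriptor, assuming the usual bash process substitution (the ip netns exec wrapper and -e 0xFFFF flags are elided):

    # Replay a saved configuration instead of reissuing every RPC.
    tgtconf=$(scripts/rpc.py save_config)               # @196: dump the live target's config as JSON
    killprocess "$nvmfpid"                              # stop the RPC-configured target
    build/bin/nvmf_tgt -m 0x2 -c <(echo "$tgtconf") &   # @203: <(...) shows up as /dev/fd/62 in the trace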
00:21:40.448 [2024-07-15 20:35:32.719745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:40.708 [2024-07-15 20:35:32.903504] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:40.708 [2024-07-15 20:35:32.919482] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:40.708 [2024-07-15 20:35:32.935538] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:40.708 [2024-07-15 20:35:32.944396] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=1379450 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 1379450 /var/tmp/bdevperf.sock 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1379450 ']' 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:41.278 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
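waitforlisten, whose echo closes the block above, gates every step on the freshly forked daemon actually serving its RPC socket. A sketch reconstructed from its xtrace (autotest_common.sh@829-@862); the in-loop probe is an assumption, since the trace only shows the argument checks, the retry counter, and the final (( i == 0 )) test:

    waitforlisten() {
        [ -z "$1" ] && return 1                       # @829: a pid is required
        local rpc_addr=${2:-/var/tmp/spdk.sock}       # @833
        local max_retries=100 i                       # @834
        echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."   # @836
        for (( i = max_retries; i > 0; i-- )); do
            kill -0 "$1" 2> /dev/null || return 1     # assumed: stop waiting if the daemon died
            [ -S "$rpc_addr" ] && break               # assumed probe: the UNIX socket exists
            sleep 0.5
        done
        (( i == 0 )) && return 1                      # @858: retries exhausted
        return 0                                      # @862: the daemon is listening
    }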
00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:41.278 20:35:33 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:41.279 20:35:33 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:21:41.279 "subsystems": [ 00:21:41.279 { 00:21:41.279 "subsystem": "keyring", 00:21:41.279 "config": [] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "iobuf", 00:21:41.279 "config": [ 00:21:41.279 { 00:21:41.279 "method": "iobuf_set_options", 00:21:41.279 "params": { 00:21:41.279 "small_pool_count": 8192, 00:21:41.279 "large_pool_count": 1024, 00:21:41.279 "small_bufsize": 8192, 00:21:41.279 "large_bufsize": 135168 00:21:41.279 } 00:21:41.279 } 00:21:41.279 ] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "sock", 00:21:41.279 "config": [ 00:21:41.279 { 00:21:41.279 "method": "sock_set_default_impl", 00:21:41.279 "params": { 00:21:41.279 "impl_name": "posix" 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "sock_impl_set_options", 00:21:41.279 "params": { 00:21:41.279 "impl_name": "ssl", 00:21:41.279 "recv_buf_size": 4096, 00:21:41.279 "send_buf_size": 4096, 00:21:41.279 "enable_recv_pipe": true, 00:21:41.279 "enable_quickack": false, 00:21:41.279 "enable_placement_id": 0, 00:21:41.279 "enable_zerocopy_send_server": true, 00:21:41.279 "enable_zerocopy_send_client": false, 00:21:41.279 "zerocopy_threshold": 0, 00:21:41.279 "tls_version": 0, 00:21:41.279 "enable_ktls": false 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "sock_impl_set_options", 00:21:41.279 "params": { 00:21:41.279 "impl_name": "posix", 00:21:41.279 "recv_buf_size": 2097152, 00:21:41.279 "send_buf_size": 2097152, 00:21:41.279 "enable_recv_pipe": true, 00:21:41.279 "enable_quickack": false, 00:21:41.279 "enable_placement_id": 0, 00:21:41.279 "enable_zerocopy_send_server": true, 00:21:41.279 "enable_zerocopy_send_client": false, 00:21:41.279 "zerocopy_threshold": 0, 00:21:41.279 "tls_version": 0, 00:21:41.279 "enable_ktls": false 00:21:41.279 } 00:21:41.279 } 00:21:41.279 ] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "vmd", 00:21:41.279 "config": [] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "accel", 00:21:41.279 "config": [ 00:21:41.279 { 00:21:41.279 "method": "accel_set_options", 00:21:41.279 "params": { 00:21:41.279 "small_cache_size": 128, 00:21:41.279 "large_cache_size": 16, 00:21:41.279 "task_count": 2048, 00:21:41.279 "sequence_count": 2048, 00:21:41.279 "buf_count": 2048 00:21:41.279 } 00:21:41.279 } 00:21:41.279 ] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "bdev", 00:21:41.279 "config": [ 00:21:41.279 { 00:21:41.279 "method": "bdev_set_options", 00:21:41.279 "params": { 00:21:41.279 "bdev_io_pool_size": 65535, 00:21:41.279 "bdev_io_cache_size": 256, 00:21:41.279 "bdev_auto_examine": true, 00:21:41.279 "iobuf_small_cache_size": 128, 00:21:41.279 "iobuf_large_cache_size": 16 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "bdev_raid_set_options", 00:21:41.279 "params": { 00:21:41.279 "process_window_size_kb": 1024 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "bdev_iscsi_set_options", 00:21:41.279 "params": { 00:21:41.279 "timeout_sec": 30 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": 
"bdev_nvme_set_options", 00:21:41.279 "params": { 00:21:41.279 "action_on_timeout": "none", 00:21:41.279 "timeout_us": 0, 00:21:41.279 "timeout_admin_us": 0, 00:21:41.279 "keep_alive_timeout_ms": 10000, 00:21:41.279 "arbitration_burst": 0, 00:21:41.279 "low_priority_weight": 0, 00:21:41.279 "medium_priority_weight": 0, 00:21:41.279 "high_priority_weight": 0, 00:21:41.279 "nvme_adminq_poll_period_us": 10000, 00:21:41.279 "nvme_ioq_poll_period_us": 0, 00:21:41.279 "io_queue_requests": 512, 00:21:41.279 "delay_cmd_submit": true, 00:21:41.279 "transport_retry_count": 4, 00:21:41.279 "bdev_retry_count": 3, 00:21:41.279 "transport_ack_timeout": 0, 00:21:41.279 "ctrlr_loss_timeout_sec": 0, 00:21:41.279 "reconnect_delay_sec": 0, 00:21:41.279 "fast_io_fail_timeout_sec": 0, 00:21:41.279 "disable_auto_failback": false, 00:21:41.279 "generate_uuids": false, 00:21:41.279 "transport_tos": 0, 00:21:41.279 "nvme_error_stat": false, 00:21:41.279 "rdma_srq_size": 0, 00:21:41.279 "io_path_stat": false, 00:21:41.279 "allow_accel_sequence": false, 00:21:41.279 "rdma_max_cq_size": 0, 00:21:41.279 "rdma_cm_event_timeout_ms": 0, 00:21:41.279 "dhchap_digests": [ 00:21:41.279 "sha256", 00:21:41.279 "sha384", 00:21:41.279 "sha512" 00:21:41.279 ], 00:21:41.279 "dhchap_dhgroups": [ 00:21:41.279 "null", 00:21:41.279 "ffdhe2048", 00:21:41.279 "ffdhe3072", 00:21:41.279 "ffdhe4096", 00:21:41.279 "ffdhe6144", 00:21:41.279 "ffdhe8192" 00:21:41.279 ] 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "bdev_nvme_attach_controller", 00:21:41.279 "params": { 00:21:41.279 "name": "TLSTEST", 00:21:41.279 "trtype": "TCP", 00:21:41.279 "adrfam": "IPv4", 00:21:41.279 "traddr": "10.0.0.2", 00:21:41.279 "trsvcid": "4420", 00:21:41.279 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:41.279 "prchk_reftag": false, 00:21:41.279 "prchk_guard": false, 00:21:41.279 "ctrlr_loss_timeout_sec": 0, 00:21:41.279 "reconnect_delay_sec": 0, 00:21:41.279 "fast_io_fail_timeout_sec": 0, 00:21:41.279 "psk": "/tmp/tmp.Ngu1U6RrRP", 00:21:41.279 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:41.279 "hdgst": false, 00:21:41.279 "ddgst": false 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "bdev_nvme_set_hotplug", 00:21:41.279 "params": { 00:21:41.279 "period_us": 100000, 00:21:41.279 "enable": false 00:21:41.279 } 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "method": "bdev_wait_for_examine" 00:21:41.279 } 00:21:41.279 ] 00:21:41.279 }, 00:21:41.279 { 00:21:41.279 "subsystem": "nbd", 00:21:41.279 "config": [] 00:21:41.279 } 00:21:41.279 ] 00:21:41.279 }' 00:21:41.279 [2024-07-15 20:35:33.447865] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:41.279 [2024-07-15 20:35:33.447914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1379450 ] 00:21:41.279 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.280 [2024-07-15 20:35:33.502414] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.280 [2024-07-15 20:35:33.554408] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.540 [2024-07-15 20:35:33.679286] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:41.540 [2024-07-15 20:35:33.679349] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:21:42.110 20:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:42.110 20:35:34 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:42.110 20:35:34 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:42.110 Running I/O for 10 seconds... 00:21:52.109 00:21:52.109 Latency(us) 00:21:52.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.109 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:52.109 Verification LBA range: start 0x0 length 0x2000 00:21:52.109 TLSTESTn1 : 10.05 3150.34 12.31 0.00 0.00 40547.51 6116.69 98304.00 00:21:52.109 =================================================================================================================== 00:21:52.109 Total : 3150.34 12.31 0.00 0.00 40547.51 6116.69 98304.00 00:21:52.109 0 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 1379450 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1379450 ']' 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1379450 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1379450 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1379450' 00:21:52.109 killing process with pid 1379450 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1379450 00:21:52.109 Received shutdown signal, test time was about 10.000000 seconds 00:21:52.109 00:21:52.109 Latency(us) 00:21:52.109 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.109 =================================================================================================================== 00:21:52.109 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.109 [2024-07-15 20:35:44.427026] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 
'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:21:52.109 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1379450 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 1379299 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1379299 ']' 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1379299 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1379299 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1379299' 00:21:52.370 killing process with pid 1379299 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1379299 00:21:52.370 [2024-07-15 20:35:44.597091] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1379299 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1381672 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1381672 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1381672 ']' 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:52.370 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:52.370 20:35:44 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:52.370 [2024-07-15 20:35:44.748166] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
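Note that every nvmfappstart, including this final one, re-arms the same cleanup trap (nvmf/common.sh@484, visible again below), so even an aborted run dumps the application's shared-memory state and tears the target down; the || : keeps a failed process_shm from masking the test's real exit status:

    trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT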
00:21:52.370 [2024-07-15 20:35:44.748212] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:52.632 EAL: No free 2048 kB hugepages reported on node 1 00:21:52.632 [2024-07-15 20:35:44.809195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:52.632 [2024-07-15 20:35:44.872463] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:52.632 [2024-07-15 20:35:44.872501] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:52.632 [2024-07-15 20:35:44.872508] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:52.632 [2024-07-15 20:35:44.872514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:52.632 [2024-07-15 20:35:44.872520] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:52.632 [2024-07-15 20:35:44.872538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.Ngu1U6RrRP 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.Ngu1U6RrRP 00:21:53.204 20:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:53.465 [2024-07-15 20:35:45.707773] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.465 20:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:53.726 20:35:45 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:53.726 [2024-07-15 20:35:46.040596] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.726 [2024-07-15 20:35:46.040763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.726 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:53.988 malloc0 00:21:53.988 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.Ngu1U6RrRP 00:21:54.249 [2024-07-15 20:35:46.556714] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=1382113 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 1382113 /var/tmp/bdevperf.sock 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1382113 ']' 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:54.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:54.249 20:35:46 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:54.249 [2024-07-15 20:35:46.624738] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:54.249 [2024-07-15 20:35:46.624789] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382113 ] 00:21:54.510 EAL: No free 2048 kB hugepages reported on node 1 00:21:54.510 [2024-07-15 20:35:46.705321] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:54.510 [2024-07-15 20:35:46.759147] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.082 20:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:55.082 20:35:47 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:55.082 20:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ngu1U6RrRP 00:21:55.343 20:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:55.343 [2024-07-15 20:35:47.685698] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:55.604 nvme0n1 00:21:55.604 20:35:47 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:55.604 Running I/O for 1 seconds... 
00:21:56.546
00:21:56.546 Latency(us)
00:21:56.546 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.546 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:21:56.546 Verification LBA range: start 0x0 length 0x2000
00:21:56.546 nvme0n1 : 1.05 5371.09 20.98 0.00 0.00 23293.14 4505.60 46967.47
00:21:56.546 ===================================================================================================================
00:21:56.546 Total : 5371.09 20.98 0.00 0.00 23293.14 4505.60 46967.47
00:21:56.546 0
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 1382113
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1382113 ']'
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1382113
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:56.546 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382113
00:21:56.807 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:21:56.807 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:21:56.807 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382113'
00:21:56.807 killing process with pid 1382113
00:21:56.807 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1382113
00:21:56.807 Received shutdown signal, test time was about 1.000000 seconds
00:21:56.807
00:21:56.807 Latency(us)
00:21:56.807 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:56.807 ===================================================================================================================
00:21:56.807 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:56.807 20:35:48 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1382113
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 1381672
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1381672 ']'
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1381672
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1381672
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1381672'
00:21:56.807 killing process with pid 1381672
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1381672
00:21:56.807 [2024-07-15 20:35:49.129992] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:21:56.807 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1381672
00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:21:57.068
20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1382514 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1382514 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1382514 ']' 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.068 20:35:49 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.068 [2024-07-15 20:35:49.339958] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:21:57.068 [2024-07-15 20:35:49.340016] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:21:57.068 [2024-07-15 20:35:49.412372] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.330 [2024-07-15 20:35:49.476974] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:57.330 [2024-07-15 20:35:49.477010] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:57.330 [2024-07-15 20:35:49.477018] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:57.330 [2024-07-15 20:35:49.477024] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:57.330 [2024-07-15 20:35:49.477030] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
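For reference, the setup_nvmf_tgt steps traced earlier (target/tls.sh@49-58) reduce to the RPC sequence below. This is a minimal sketch, not part of the trace: it assumes a freshly started nvmf_tgt listening on the default /var/tmp/spdk.sock, and it reuses the paths, NQNs, address, and run-specific PSK file shown in this log.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  key=/tmp/tmp.Ngu1U6RrRP   # run-specific PSK file generated earlier in the test
  $rpc nvmf_create_transport -t tcp -o                                   # create the TCP transport (flags exactly as traced)
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -k                                      # -k: TLS listener (logged above as experimental)
  $rpc bdev_malloc_create 32 4096 -b malloc0                             # 32 MiB malloc bdev, 4096-byte blocks
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

Note that the --psk path form is what triggers the "PSK path" deprecation warnings in this log; it is scheduled for removal in v24.09, and the initiator side of these runs instead registers the key via keyring_file_add_key and references it by name.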
00:21:57.330 [2024-07-15 20:35:49.477047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.901 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.901 [2024-07-15 20:35:50.155376] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:57.901 malloc0 00:21:57.902 [2024-07-15 20:35:50.182035] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:57.902 [2024-07-15 20:35:50.182226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=1382856 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 1382856 /var/tmp/bdevperf.sock 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1382856 ']' 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.902 20:35:50 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:57.902 [2024-07-15 20:35:50.258555] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:21:57.902 [2024-07-15 20:35:50.258606] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1382856 ] 00:21:58.162 EAL: No free 2048 kB hugepages reported on node 1 00:21:58.162 [2024-07-15 20:35:50.339492] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.162 [2024-07-15 20:35:50.393345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:58.732 20:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:58.732 20:35:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:21:58.732 20:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ngu1U6RrRP 00:21:58.992 20:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:58.992 [2024-07-15 20:35:51.307795] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.253 nvme0n1 00:21:59.253 20:35:51 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.253 Running I/O for 1 seconds... 00:22:00.194 00:22:00.194 Latency(us) 00:22:00.194 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.194 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:00.194 Verification LBA range: start 0x0 length 0x2000 00:22:00.194 nvme0n1 : 1.02 5447.45 21.28 0.00 0.00 23320.53 4532.91 27306.67 00:22:00.194 =================================================================================================================== 00:22:00.194 Total : 5447.45 21.28 0.00 0.00 23320.53 4532.91 27306.67 00:22:00.194 0 00:22:00.194 20:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:22:00.194 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.194 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.454 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.454 20:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:22:00.454 "subsystems": [ 00:22:00.454 { 00:22:00.454 "subsystem": "keyring", 00:22:00.454 "config": [ 00:22:00.454 { 00:22:00.454 "method": "keyring_file_add_key", 00:22:00.454 "params": { 00:22:00.454 "name": "key0", 00:22:00.454 "path": "/tmp/tmp.Ngu1U6RrRP" 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "iobuf", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "iobuf_set_options", 00:22:00.455 "params": { 00:22:00.455 "small_pool_count": 8192, 00:22:00.455 "large_pool_count": 1024, 00:22:00.455 "small_bufsize": 8192, 00:22:00.455 "large_bufsize": 135168 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "sock", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "sock_set_default_impl", 00:22:00.455 "params": { 00:22:00.455 "impl_name": "posix" 00:22:00.455 } 
00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "sock_impl_set_options", 00:22:00.455 "params": { 00:22:00.455 "impl_name": "ssl", 00:22:00.455 "recv_buf_size": 4096, 00:22:00.455 "send_buf_size": 4096, 00:22:00.455 "enable_recv_pipe": true, 00:22:00.455 "enable_quickack": false, 00:22:00.455 "enable_placement_id": 0, 00:22:00.455 "enable_zerocopy_send_server": true, 00:22:00.455 "enable_zerocopy_send_client": false, 00:22:00.455 "zerocopy_threshold": 0, 00:22:00.455 "tls_version": 0, 00:22:00.455 "enable_ktls": false 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "sock_impl_set_options", 00:22:00.455 "params": { 00:22:00.455 "impl_name": "posix", 00:22:00.455 "recv_buf_size": 2097152, 00:22:00.455 "send_buf_size": 2097152, 00:22:00.455 "enable_recv_pipe": true, 00:22:00.455 "enable_quickack": false, 00:22:00.455 "enable_placement_id": 0, 00:22:00.455 "enable_zerocopy_send_server": true, 00:22:00.455 "enable_zerocopy_send_client": false, 00:22:00.455 "zerocopy_threshold": 0, 00:22:00.455 "tls_version": 0, 00:22:00.455 "enable_ktls": false 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "vmd", 00:22:00.455 "config": [] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "accel", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "accel_set_options", 00:22:00.455 "params": { 00:22:00.455 "small_cache_size": 128, 00:22:00.455 "large_cache_size": 16, 00:22:00.455 "task_count": 2048, 00:22:00.455 "sequence_count": 2048, 00:22:00.455 "buf_count": 2048 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "bdev", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "bdev_set_options", 00:22:00.455 "params": { 00:22:00.455 "bdev_io_pool_size": 65535, 00:22:00.455 "bdev_io_cache_size": 256, 00:22:00.455 "bdev_auto_examine": true, 00:22:00.455 "iobuf_small_cache_size": 128, 00:22:00.455 "iobuf_large_cache_size": 16 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_raid_set_options", 00:22:00.455 "params": { 00:22:00.455 "process_window_size_kb": 1024 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_iscsi_set_options", 00:22:00.455 "params": { 00:22:00.455 "timeout_sec": 30 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_nvme_set_options", 00:22:00.455 "params": { 00:22:00.455 "action_on_timeout": "none", 00:22:00.455 "timeout_us": 0, 00:22:00.455 "timeout_admin_us": 0, 00:22:00.455 "keep_alive_timeout_ms": 10000, 00:22:00.455 "arbitration_burst": 0, 00:22:00.455 "low_priority_weight": 0, 00:22:00.455 "medium_priority_weight": 0, 00:22:00.455 "high_priority_weight": 0, 00:22:00.455 "nvme_adminq_poll_period_us": 10000, 00:22:00.455 "nvme_ioq_poll_period_us": 0, 00:22:00.455 "io_queue_requests": 0, 00:22:00.455 "delay_cmd_submit": true, 00:22:00.455 "transport_retry_count": 4, 00:22:00.455 "bdev_retry_count": 3, 00:22:00.455 "transport_ack_timeout": 0, 00:22:00.455 "ctrlr_loss_timeout_sec": 0, 00:22:00.455 "reconnect_delay_sec": 0, 00:22:00.455 "fast_io_fail_timeout_sec": 0, 00:22:00.455 "disable_auto_failback": false, 00:22:00.455 "generate_uuids": false, 00:22:00.455 "transport_tos": 0, 00:22:00.455 "nvme_error_stat": false, 00:22:00.455 "rdma_srq_size": 0, 00:22:00.455 "io_path_stat": false, 00:22:00.455 "allow_accel_sequence": false, 00:22:00.455 "rdma_max_cq_size": 0, 00:22:00.455 "rdma_cm_event_timeout_ms": 0, 00:22:00.455 "dhchap_digests": [ 00:22:00.455 "sha256", 
00:22:00.455 "sha384", 00:22:00.455 "sha512" 00:22:00.455 ], 00:22:00.455 "dhchap_dhgroups": [ 00:22:00.455 "null", 00:22:00.455 "ffdhe2048", 00:22:00.455 "ffdhe3072", 00:22:00.455 "ffdhe4096", 00:22:00.455 "ffdhe6144", 00:22:00.455 "ffdhe8192" 00:22:00.455 ] 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_nvme_set_hotplug", 00:22:00.455 "params": { 00:22:00.455 "period_us": 100000, 00:22:00.455 "enable": false 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_malloc_create", 00:22:00.455 "params": { 00:22:00.455 "name": "malloc0", 00:22:00.455 "num_blocks": 8192, 00:22:00.455 "block_size": 4096, 00:22:00.455 "physical_block_size": 4096, 00:22:00.455 "uuid": "904a0271-6331-41a8-a222-faaddeacf3be", 00:22:00.455 "optimal_io_boundary": 0 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "bdev_wait_for_examine" 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "nbd", 00:22:00.455 "config": [] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "scheduler", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "framework_set_scheduler", 00:22:00.455 "params": { 00:22:00.455 "name": "static" 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "subsystem": "nvmf", 00:22:00.455 "config": [ 00:22:00.455 { 00:22:00.455 "method": "nvmf_set_config", 00:22:00.455 "params": { 00:22:00.455 "discovery_filter": "match_any", 00:22:00.455 "admin_cmd_passthru": { 00:22:00.455 "identify_ctrlr": false 00:22:00.455 } 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_set_max_subsystems", 00:22:00.455 "params": { 00:22:00.455 "max_subsystems": 1024 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_set_crdt", 00:22:00.455 "params": { 00:22:00.455 "crdt1": 0, 00:22:00.455 "crdt2": 0, 00:22:00.455 "crdt3": 0 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_create_transport", 00:22:00.455 "params": { 00:22:00.455 "trtype": "TCP", 00:22:00.455 "max_queue_depth": 128, 00:22:00.455 "max_io_qpairs_per_ctrlr": 127, 00:22:00.455 "in_capsule_data_size": 4096, 00:22:00.455 "max_io_size": 131072, 00:22:00.455 "io_unit_size": 131072, 00:22:00.455 "max_aq_depth": 128, 00:22:00.455 "num_shared_buffers": 511, 00:22:00.455 "buf_cache_size": 4294967295, 00:22:00.455 "dif_insert_or_strip": false, 00:22:00.455 "zcopy": false, 00:22:00.455 "c2h_success": false, 00:22:00.455 "sock_priority": 0, 00:22:00.455 "abort_timeout_sec": 1, 00:22:00.455 "ack_timeout": 0, 00:22:00.455 "data_wr_pool_size": 0 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_create_subsystem", 00:22:00.455 "params": { 00:22:00.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.455 "allow_any_host": false, 00:22:00.455 "serial_number": "00000000000000000000", 00:22:00.455 "model_number": "SPDK bdev Controller", 00:22:00.455 "max_namespaces": 32, 00:22:00.455 "min_cntlid": 1, 00:22:00.455 "max_cntlid": 65519, 00:22:00.455 "ana_reporting": false 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_subsystem_add_host", 00:22:00.455 "params": { 00:22:00.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.455 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.455 "psk": "key0" 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_subsystem_add_ns", 00:22:00.455 "params": { 00:22:00.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.455 "namespace": { 00:22:00.455 "nsid": 1, 
00:22:00.455 "bdev_name": "malloc0", 00:22:00.455 "nguid": "904A0271633141A8A222FAADDEACF3BE", 00:22:00.455 "uuid": "904a0271-6331-41a8-a222-faaddeacf3be", 00:22:00.455 "no_auto_visible": false 00:22:00.455 } 00:22:00.455 } 00:22:00.455 }, 00:22:00.455 { 00:22:00.455 "method": "nvmf_subsystem_add_listener", 00:22:00.455 "params": { 00:22:00.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.455 "listen_address": { 00:22:00.455 "trtype": "TCP", 00:22:00.455 "adrfam": "IPv4", 00:22:00.455 "traddr": "10.0.0.2", 00:22:00.455 "trsvcid": "4420" 00:22:00.455 }, 00:22:00.455 "secure_channel": true 00:22:00.455 } 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 } 00:22:00.455 ] 00:22:00.455 }' 00:22:00.455 20:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:22:00.717 "subsystems": [ 00:22:00.717 { 00:22:00.717 "subsystem": "keyring", 00:22:00.717 "config": [ 00:22:00.717 { 00:22:00.717 "method": "keyring_file_add_key", 00:22:00.717 "params": { 00:22:00.717 "name": "key0", 00:22:00.717 "path": "/tmp/tmp.Ngu1U6RrRP" 00:22:00.717 } 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "iobuf", 00:22:00.717 "config": [ 00:22:00.717 { 00:22:00.717 "method": "iobuf_set_options", 00:22:00.717 "params": { 00:22:00.717 "small_pool_count": 8192, 00:22:00.717 "large_pool_count": 1024, 00:22:00.717 "small_bufsize": 8192, 00:22:00.717 "large_bufsize": 135168 00:22:00.717 } 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "sock", 00:22:00.717 "config": [ 00:22:00.717 { 00:22:00.717 "method": "sock_set_default_impl", 00:22:00.717 "params": { 00:22:00.717 "impl_name": "posix" 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "sock_impl_set_options", 00:22:00.717 "params": { 00:22:00.717 "impl_name": "ssl", 00:22:00.717 "recv_buf_size": 4096, 00:22:00.717 "send_buf_size": 4096, 00:22:00.717 "enable_recv_pipe": true, 00:22:00.717 "enable_quickack": false, 00:22:00.717 "enable_placement_id": 0, 00:22:00.717 "enable_zerocopy_send_server": true, 00:22:00.717 "enable_zerocopy_send_client": false, 00:22:00.717 "zerocopy_threshold": 0, 00:22:00.717 "tls_version": 0, 00:22:00.717 "enable_ktls": false 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "sock_impl_set_options", 00:22:00.717 "params": { 00:22:00.717 "impl_name": "posix", 00:22:00.717 "recv_buf_size": 2097152, 00:22:00.717 "send_buf_size": 2097152, 00:22:00.717 "enable_recv_pipe": true, 00:22:00.717 "enable_quickack": false, 00:22:00.717 "enable_placement_id": 0, 00:22:00.717 "enable_zerocopy_send_server": true, 00:22:00.717 "enable_zerocopy_send_client": false, 00:22:00.717 "zerocopy_threshold": 0, 00:22:00.717 "tls_version": 0, 00:22:00.717 "enable_ktls": false 00:22:00.717 } 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "vmd", 00:22:00.717 "config": [] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "accel", 00:22:00.717 "config": [ 00:22:00.717 { 00:22:00.717 "method": "accel_set_options", 00:22:00.717 "params": { 00:22:00.717 "small_cache_size": 128, 00:22:00.717 "large_cache_size": 16, 00:22:00.717 "task_count": 2048, 00:22:00.717 "sequence_count": 2048, 00:22:00.717 "buf_count": 2048 00:22:00.717 } 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "bdev", 00:22:00.717 "config": [ 
00:22:00.717 { 00:22:00.717 "method": "bdev_set_options", 00:22:00.717 "params": { 00:22:00.717 "bdev_io_pool_size": 65535, 00:22:00.717 "bdev_io_cache_size": 256, 00:22:00.717 "bdev_auto_examine": true, 00:22:00.717 "iobuf_small_cache_size": 128, 00:22:00.717 "iobuf_large_cache_size": 16 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_raid_set_options", 00:22:00.717 "params": { 00:22:00.717 "process_window_size_kb": 1024 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_iscsi_set_options", 00:22:00.717 "params": { 00:22:00.717 "timeout_sec": 30 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_nvme_set_options", 00:22:00.717 "params": { 00:22:00.717 "action_on_timeout": "none", 00:22:00.717 "timeout_us": 0, 00:22:00.717 "timeout_admin_us": 0, 00:22:00.717 "keep_alive_timeout_ms": 10000, 00:22:00.717 "arbitration_burst": 0, 00:22:00.717 "low_priority_weight": 0, 00:22:00.717 "medium_priority_weight": 0, 00:22:00.717 "high_priority_weight": 0, 00:22:00.717 "nvme_adminq_poll_period_us": 10000, 00:22:00.717 "nvme_ioq_poll_period_us": 0, 00:22:00.717 "io_queue_requests": 512, 00:22:00.717 "delay_cmd_submit": true, 00:22:00.717 "transport_retry_count": 4, 00:22:00.717 "bdev_retry_count": 3, 00:22:00.717 "transport_ack_timeout": 0, 00:22:00.717 "ctrlr_loss_timeout_sec": 0, 00:22:00.717 "reconnect_delay_sec": 0, 00:22:00.717 "fast_io_fail_timeout_sec": 0, 00:22:00.717 "disable_auto_failback": false, 00:22:00.717 "generate_uuids": false, 00:22:00.717 "transport_tos": 0, 00:22:00.717 "nvme_error_stat": false, 00:22:00.717 "rdma_srq_size": 0, 00:22:00.717 "io_path_stat": false, 00:22:00.717 "allow_accel_sequence": false, 00:22:00.717 "rdma_max_cq_size": 0, 00:22:00.717 "rdma_cm_event_timeout_ms": 0, 00:22:00.717 "dhchap_digests": [ 00:22:00.717 "sha256", 00:22:00.717 "sha384", 00:22:00.717 "sha512" 00:22:00.717 ], 00:22:00.717 "dhchap_dhgroups": [ 00:22:00.717 "null", 00:22:00.717 "ffdhe2048", 00:22:00.717 "ffdhe3072", 00:22:00.717 "ffdhe4096", 00:22:00.717 "ffdhe6144", 00:22:00.717 "ffdhe8192" 00:22:00.717 ] 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_nvme_attach_controller", 00:22:00.717 "params": { 00:22:00.717 "name": "nvme0", 00:22:00.717 "trtype": "TCP", 00:22:00.717 "adrfam": "IPv4", 00:22:00.717 "traddr": "10.0.0.2", 00:22:00.717 "trsvcid": "4420", 00:22:00.717 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.717 "prchk_reftag": false, 00:22:00.717 "prchk_guard": false, 00:22:00.717 "ctrlr_loss_timeout_sec": 0, 00:22:00.717 "reconnect_delay_sec": 0, 00:22:00.717 "fast_io_fail_timeout_sec": 0, 00:22:00.717 "psk": "key0", 00:22:00.717 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:00.717 "hdgst": false, 00:22:00.717 "ddgst": false 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_nvme_set_hotplug", 00:22:00.717 "params": { 00:22:00.717 "period_us": 100000, 00:22:00.717 "enable": false 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_enable_histogram", 00:22:00.717 "params": { 00:22:00.717 "name": "nvme0n1", 00:22:00.717 "enable": true 00:22:00.717 } 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "method": "bdev_wait_for_examine" 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }, 00:22:00.717 { 00:22:00.717 "subsystem": "nbd", 00:22:00.717 "config": [] 00:22:00.717 } 00:22:00.717 ] 00:22:00.717 }' 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 1382856 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 1382856 ']' 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1382856 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382856 00:22:00.717 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:00.718 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:00.718 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382856' 00:22:00.718 killing process with pid 1382856 00:22:00.718 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1382856 00:22:00.718 Received shutdown signal, test time was about 1.000000 seconds 00:22:00.718 00:22:00.718 Latency(us) 00:22:00.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:00.718 =================================================================================================================== 00:22:00.718 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:00.718 20:35:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1382856 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 1382514 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1382514 ']' 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1382514 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1382514 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1382514' 00:22:00.718 killing process with pid 1382514 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1382514 00:22:00.718 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1382514 00:22:00.978 20:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:22:00.978 20:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:00.978 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:00.978 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.978 20:35:53 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:22:00.978 "subsystems": [ 00:22:00.978 { 00:22:00.978 "subsystem": "keyring", 00:22:00.978 "config": [ 00:22:00.978 { 00:22:00.978 "method": "keyring_file_add_key", 00:22:00.978 "params": { 00:22:00.978 "name": "key0", 00:22:00.978 "path": "/tmp/tmp.Ngu1U6RrRP" 00:22:00.978 } 00:22:00.978 } 00:22:00.978 ] 00:22:00.978 }, 00:22:00.979 { 00:22:00.979 "subsystem": "iobuf", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "iobuf_set_options", 00:22:00.979 "params": { 00:22:00.979 "small_pool_count": 8192, 00:22:00.979 "large_pool_count": 1024, 00:22:00.979 "small_bufsize": 8192, 00:22:00.979 
"large_bufsize": 135168 00:22:00.979 } 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "sock", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "sock_set_default_impl", 00:22:00.979 "params": { 00:22:00.979 "impl_name": "posix" 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "sock_impl_set_options", 00:22:00.979 "params": { 00:22:00.979 "impl_name": "ssl", 00:22:00.979 "recv_buf_size": 4096, 00:22:00.979 "send_buf_size": 4096, 00:22:00.979 "enable_recv_pipe": true, 00:22:00.979 "enable_quickack": false, 00:22:00.979 "enable_placement_id": 0, 00:22:00.979 "enable_zerocopy_send_server": true, 00:22:00.979 "enable_zerocopy_send_client": false, 00:22:00.979 "zerocopy_threshold": 0, 00:22:00.979 "tls_version": 0, 00:22:00.979 "enable_ktls": false 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "sock_impl_set_options", 00:22:00.979 "params": { 00:22:00.979 "impl_name": "posix", 00:22:00.979 "recv_buf_size": 2097152, 00:22:00.979 "send_buf_size": 2097152, 00:22:00.979 "enable_recv_pipe": true, 00:22:00.979 "enable_quickack": false, 00:22:00.979 "enable_placement_id": 0, 00:22:00.979 "enable_zerocopy_send_server": true, 00:22:00.979 "enable_zerocopy_send_client": false, 00:22:00.979 "zerocopy_threshold": 0, 00:22:00.979 "tls_version": 0, 00:22:00.979 "enable_ktls": false 00:22:00.979 } 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "vmd", 00:22:00.979 "config": [] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "accel", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "accel_set_options", 00:22:00.979 "params": { 00:22:00.979 "small_cache_size": 128, 00:22:00.979 "large_cache_size": 16, 00:22:00.979 "task_count": 2048, 00:22:00.979 "sequence_count": 2048, 00:22:00.979 "buf_count": 2048 00:22:00.979 } 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "bdev", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "bdev_set_options", 00:22:00.979 "params": { 00:22:00.979 "bdev_io_pool_size": 65535, 00:22:00.979 "bdev_io_cache_size": 256, 00:22:00.979 "bdev_auto_examine": true, 00:22:00.979 "iobuf_small_cache_size": 128, 00:22:00.979 "iobuf_large_cache_size": 16 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_raid_set_options", 00:22:00.979 "params": { 00:22:00.979 "process_window_size_kb": 1024 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_iscsi_set_options", 00:22:00.979 "params": { 00:22:00.979 "timeout_sec": 30 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_nvme_set_options", 00:22:00.979 "params": { 00:22:00.979 "action_on_timeout": "none", 00:22:00.979 "timeout_us": 0, 00:22:00.979 "timeout_admin_us": 0, 00:22:00.979 "keep_alive_timeout_ms": 10000, 00:22:00.979 "arbitration_burst": 0, 00:22:00.979 "low_priority_weight": 0, 00:22:00.979 "medium_priority_weight": 0, 00:22:00.979 "high_priority_weight": 0, 00:22:00.979 "nvme_adminq_poll_period_us": 10000, 00:22:00.979 "nvme_ioq_poll_period_us": 0, 00:22:00.979 "io_queue_requests": 0, 00:22:00.979 "delay_cmd_submit": true, 00:22:00.979 "transport_retry_count": 4, 00:22:00.979 "bdev_retry_count": 3, 00:22:00.979 "transport_ack_timeout": 0, 00:22:00.979 "ctrlr_loss_timeout_sec": 0, 00:22:00.979 "reconnect_delay_sec": 0, 00:22:00.979 "fast_io_fail_timeout_sec": 0, 00:22:00.979 "disable_auto_failback": false, 00:22:00.979 "generate_uuids": false, 00:22:00.979 
"transport_tos": 0, 00:22:00.979 "nvme_error_stat": false, 00:22:00.979 "rdma_srq_size": 0, 00:22:00.979 "io_path_stat": false, 00:22:00.979 "allow_accel_sequence": false, 00:22:00.979 "rdma_max_cq_size": 0, 00:22:00.979 "rdma_cm_event_timeout_ms": 0, 00:22:00.979 "dhchap_digests": [ 00:22:00.979 "sha256", 00:22:00.979 "sha384", 00:22:00.979 "sha512" 00:22:00.979 ], 00:22:00.979 "dhchap_dhgroups": [ 00:22:00.979 "null", 00:22:00.979 "ffdhe2048", 00:22:00.979 "ffdhe3072", 00:22:00.979 "ffdhe4096", 00:22:00.979 "ffdhe6144", 00:22:00.979 "ffdhe8192" 00:22:00.979 ] 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_nvme_set_hotplug", 00:22:00.979 "params": { 00:22:00.979 "period_us": 100000, 00:22:00.979 "enable": false 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_malloc_create", 00:22:00.979 "params": { 00:22:00.979 "name": "malloc0", 00:22:00.979 "num_blocks": 8192, 00:22:00.979 "block_size": 4096, 00:22:00.979 "physical_block_size": 4096, 00:22:00.979 "uuid": "904a0271-6331-41a8-a222-faaddeacf3be", 00:22:00.979 "optimal_io_boundary": 0 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "bdev_wait_for_examine" 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "nbd", 00:22:00.979 "config": [] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "scheduler", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "framework_set_scheduler", 00:22:00.979 "params": { 00:22:00.979 "name": "static" 00:22:00.979 } 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "subsystem": "nvmf", 00:22:00.979 "config": [ 00:22:00.979 { 00:22:00.979 "method": "nvmf_set_config", 00:22:00.979 "params": { 00:22:00.979 "discovery_filter": "match_any", 00:22:00.979 "admin_cmd_passthru": { 00:22:00.979 "identify_ctrlr": false 00:22:00.979 } 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_set_max_subsystems", 00:22:00.979 "params": { 00:22:00.979 "max_subsystems": 1024 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_set_crdt", 00:22:00.979 "params": { 00:22:00.979 "crdt1": 0, 00:22:00.979 "crdt2": 0, 00:22:00.979 "crdt3": 0 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_create_transport", 00:22:00.979 "params": { 00:22:00.979 "trtype": "TCP", 00:22:00.979 "max_queue_depth": 128, 00:22:00.979 "max_io_qpairs_per_ctrlr": 127, 00:22:00.979 "in_capsule_data_size": 4096, 00:22:00.979 "max_io_size": 131072, 00:22:00.979 "io_unit_size": 131072, 00:22:00.979 "max_aq_depth": 128, 00:22:00.979 "num_shared_buffers": 511, 00:22:00.979 "buf_cache_size": 4294967295, 00:22:00.979 "dif_insert_or_strip": false, 00:22:00.979 "zcopy": false, 00:22:00.979 "c2h_success": false, 00:22:00.979 "sock_priority": 0, 00:22:00.979 "abort_timeout_sec": 1, 00:22:00.979 "ack_timeout": 0, 00:22:00.979 "data_wr_pool_size": 0 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_create_subsystem", 00:22:00.979 "params": { 00:22:00.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.979 "allow_any_host": false, 00:22:00.979 "serial_number": "00000000000000000000", 00:22:00.979 "model_number": "SPDK bdev Controller", 00:22:00.979 "max_namespaces": 32, 00:22:00.979 "min_cntlid": 1, 00:22:00.979 "max_cntlid": 65519, 00:22:00.979 "ana_reporting": false 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_subsystem_add_host", 00:22:00.979 "params": { 00:22:00.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 
00:22:00.979 "host": "nqn.2016-06.io.spdk:host1", 00:22:00.979 "psk": "key0" 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_subsystem_add_ns", 00:22:00.979 "params": { 00:22:00.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.979 "namespace": { 00:22:00.979 "nsid": 1, 00:22:00.979 "bdev_name": "malloc0", 00:22:00.979 "nguid": "904A0271633141A8A222FAADDEACF3BE", 00:22:00.979 "uuid": "904a0271-6331-41a8-a222-faaddeacf3be", 00:22:00.979 "no_auto_visible": false 00:22:00.979 } 00:22:00.979 } 00:22:00.979 }, 00:22:00.979 { 00:22:00.979 "method": "nvmf_subsystem_add_listener", 00:22:00.979 "params": { 00:22:00.979 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:00.979 "listen_address": { 00:22:00.979 "trtype": "TCP", 00:22:00.979 "adrfam": "IPv4", 00:22:00.979 "traddr": "10.0.0.2", 00:22:00.979 "trsvcid": "4420" 00:22:00.979 }, 00:22:00.979 "secure_channel": true 00:22:00.979 } 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 } 00:22:00.979 ] 00:22:00.979 }' 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=1383394 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 1383394 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1383394 ']' 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:00.979 20:35:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.979 [2024-07-15 20:35:53.284354] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:22:00.979 [2024-07-15 20:35:53.284411] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.979 EAL: No free 2048 kB hugepages reported on node 1 00:22:00.979 [2024-07-15 20:35:53.356866] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:01.239 [2024-07-15 20:35:53.421825] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:01.239 [2024-07-15 20:35:53.421862] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:01.239 [2024-07-15 20:35:53.421873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:01.239 [2024-07-15 20:35:53.421879] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:01.239 [2024-07-15 20:35:53.421885] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:01.239 [2024-07-15 20:35:53.421941] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:01.239 [2024-07-15 20:35:53.619072] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:01.498 [2024-07-15 20:35:53.651076] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.498 [2024-07-15 20:35:53.671382] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=1383571 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 1383571 /var/tmp/bdevperf.sock 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 1383571 ']' 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:01.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
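The bdevperf config echoed below into /dev/fd/63 encodes, as startup JSON, the same two initiator-side steps that target/tls.sh@255-256 issued earlier as live RPCs: register the PSK file with the keyring, then attach the controller over TCP referencing that key by name. Condensed from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ngu1U6RrRP   # PSK file becomes keyring key "key0"
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1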
00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:01.758 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:22:01.758 "subsystems": [ 00:22:01.758 { 00:22:01.758 "subsystem": "keyring", 00:22:01.758 "config": [ 00:22:01.758 { 00:22:01.758 "method": "keyring_file_add_key", 00:22:01.758 "params": { 00:22:01.758 "name": "key0", 00:22:01.758 "path": "/tmp/tmp.Ngu1U6RrRP" 00:22:01.758 } 00:22:01.758 } 00:22:01.758 ] 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "subsystem": "iobuf", 00:22:01.758 "config": [ 00:22:01.758 { 00:22:01.758 "method": "iobuf_set_options", 00:22:01.758 "params": { 00:22:01.758 "small_pool_count": 8192, 00:22:01.758 "large_pool_count": 1024, 00:22:01.758 "small_bufsize": 8192, 00:22:01.758 "large_bufsize": 135168 00:22:01.758 } 00:22:01.758 } 00:22:01.758 ] 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "subsystem": "sock", 00:22:01.758 "config": [ 00:22:01.758 { 00:22:01.758 "method": "sock_set_default_impl", 00:22:01.758 "params": { 00:22:01.758 "impl_name": "posix" 00:22:01.758 } 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "method": "sock_impl_set_options", 00:22:01.758 "params": { 00:22:01.758 "impl_name": "ssl", 00:22:01.758 "recv_buf_size": 4096, 00:22:01.758 "send_buf_size": 4096, 00:22:01.758 "enable_recv_pipe": true, 00:22:01.758 "enable_quickack": false, 00:22:01.758 "enable_placement_id": 0, 00:22:01.758 "enable_zerocopy_send_server": true, 00:22:01.758 "enable_zerocopy_send_client": false, 00:22:01.758 "zerocopy_threshold": 0, 00:22:01.758 "tls_version": 0, 00:22:01.758 "enable_ktls": false 00:22:01.758 } 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "method": "sock_impl_set_options", 00:22:01.758 "params": { 00:22:01.758 "impl_name": "posix", 00:22:01.758 "recv_buf_size": 2097152, 00:22:01.758 "send_buf_size": 2097152, 00:22:01.758 "enable_recv_pipe": true, 00:22:01.758 "enable_quickack": false, 00:22:01.758 "enable_placement_id": 0, 00:22:01.758 "enable_zerocopy_send_server": true, 00:22:01.758 "enable_zerocopy_send_client": false, 00:22:01.758 "zerocopy_threshold": 0, 00:22:01.758 "tls_version": 0, 00:22:01.758 "enable_ktls": false 00:22:01.758 } 00:22:01.758 } 00:22:01.758 ] 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "subsystem": "vmd", 00:22:01.758 "config": [] 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "subsystem": "accel", 00:22:01.758 "config": [ 00:22:01.758 { 00:22:01.758 "method": "accel_set_options", 00:22:01.758 "params": { 00:22:01.758 "small_cache_size": 128, 00:22:01.758 "large_cache_size": 16, 00:22:01.758 "task_count": 2048, 00:22:01.758 "sequence_count": 2048, 00:22:01.758 "buf_count": 2048 00:22:01.758 } 00:22:01.758 } 00:22:01.758 ] 00:22:01.758 }, 00:22:01.758 { 00:22:01.758 "subsystem": "bdev", 00:22:01.758 "config": [ 00:22:01.758 { 00:22:01.759 "method": "bdev_set_options", 00:22:01.759 "params": { 00:22:01.759 "bdev_io_pool_size": 65535, 00:22:01.759 "bdev_io_cache_size": 256, 00:22:01.759 "bdev_auto_examine": true, 00:22:01.759 "iobuf_small_cache_size": 128, 00:22:01.759 "iobuf_large_cache_size": 16 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_raid_set_options", 00:22:01.759 "params": { 00:22:01.759 "process_window_size_kb": 1024 00:22:01.759 } 
00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_iscsi_set_options", 00:22:01.759 "params": { 00:22:01.759 "timeout_sec": 30 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_nvme_set_options", 00:22:01.759 "params": { 00:22:01.759 "action_on_timeout": "none", 00:22:01.759 "timeout_us": 0, 00:22:01.759 "timeout_admin_us": 0, 00:22:01.759 "keep_alive_timeout_ms": 10000, 00:22:01.759 "arbitration_burst": 0, 00:22:01.759 "low_priority_weight": 0, 00:22:01.759 "medium_priority_weight": 0, 00:22:01.759 "high_priority_weight": 0, 00:22:01.759 "nvme_adminq_poll_period_us": 10000, 00:22:01.759 "nvme_ioq_poll_period_us": 0, 00:22:01.759 "io_queue_requests": 512, 00:22:01.759 "delay_cmd_submit": true, 00:22:01.759 "transport_retry_count": 4, 00:22:01.759 "bdev_retry_count": 3, 00:22:01.759 "transport_ack_timeout": 0, 00:22:01.759 "ctrlr_loss_timeout_sec": 0, 00:22:01.759 "reconnect_delay_sec": 0, 00:22:01.759 "fast_io_fail_timeout_sec": 0, 00:22:01.759 "disable_auto_failback": false, 00:22:01.759 "generate_uuids": false, 00:22:01.759 "transport_tos": 0, 00:22:01.759 "nvme_error_stat": false, 00:22:01.759 "rdma_srq_size": 0, 00:22:01.759 "io_path_stat": false, 00:22:01.759 "allow_accel_sequence": false, 00:22:01.759 "rdma_max_cq_size": 0, 00:22:01.759 "rdma_cm_event_timeout_ms": 0, 00:22:01.759 "dhchap_digests": [ 00:22:01.759 "sha256", 00:22:01.759 "sha384", 00:22:01.759 "sha512" 00:22:01.759 ], 00:22:01.759 "dhchap_dhgroups": [ 00:22:01.759 "null", 00:22:01.759 "ffdhe2048", 00:22:01.759 "ffdhe3072", 00:22:01.759 "ffdhe4096", 00:22:01.759 "ffdhe6144", 00:22:01.759 "ffdhe8192" 00:22:01.759 ] 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_nvme_attach_controller", 00:22:01.759 "params": { 00:22:01.759 "name": "nvme0", 00:22:01.759 "trtype": "TCP", 00:22:01.759 "adrfam": "IPv4", 00:22:01.759 "traddr": "10.0.0.2", 00:22:01.759 "trsvcid": "4420", 00:22:01.759 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:01.759 "prchk_reftag": false, 00:22:01.759 "prchk_guard": false, 00:22:01.759 "ctrlr_loss_timeout_sec": 0, 00:22:01.759 "reconnect_delay_sec": 0, 00:22:01.759 "fast_io_fail_timeout_sec": 0, 00:22:01.759 "psk": "key0", 00:22:01.759 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:01.759 "hdgst": false, 00:22:01.759 "ddgst": false 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_nvme_set_hotplug", 00:22:01.759 "params": { 00:22:01.759 "period_us": 100000, 00:22:01.759 "enable": false 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_enable_histogram", 00:22:01.759 "params": { 00:22:01.759 "name": "nvme0n1", 00:22:01.759 "enable": true 00:22:01.759 } 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "method": "bdev_wait_for_examine" 00:22:01.759 } 00:22:01.759 ] 00:22:01.759 }, 00:22:01.759 { 00:22:01.759 "subsystem": "nbd", 00:22:01.759 "config": [] 00:22:01.759 } 00:22:01.759 ] 00:22:01.759 }' 00:22:01.759 [2024-07-15 20:35:54.129534] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:22:01.759 [2024-07-15 20:35:54.129583] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1383571 ]
00:22:02.018 EAL: No free 2048 kB hugepages reported on node 1
00:22:02.018 [2024-07-15 20:35:54.210188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:02.018 [2024-07-15 20:35:54.263950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:22:02.018 [2024-07-15 20:35:54.397974] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental
00:22:02.587 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:22:02.587 20:35:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0
00:22:02.587 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:02.587 20:35:54 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name'
00:22:02.846 20:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:02.846 20:35:55 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:02.846 Running I/O for 1 seconds...
00:22:03.784
00:22:03.784 Latency(us)
00:22:03.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:03.784 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:22:03.784 Verification LBA range: start 0x0 length 0x2000
00:22:03.784 nvme0n1 : 1.02 5151.05 20.12 0.00 0.00 24636.93 4778.67 62040.75
00:22:03.784 ===================================================================================================================
00:22:03.784 Total : 5151.05 20.12 0.00 0.00 24636.93 4778.67 62040.75
00:22:03.784 0
00:22:03.784 20:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:22:04.044 nvmf_trace.0
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 1383571
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1383571 ']'
00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 --
# kill -0 1383571 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1383571 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1383571' 00:22:04.044 killing process with pid 1383571 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1383571 00:22:04.044 Received shutdown signal, test time was about 1.000000 seconds 00:22:04.044 00:22:04.044 Latency(us) 00:22:04.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:04.044 =================================================================================================================== 00:22:04.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1383571 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:04.044 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:04.304 rmmod nvme_tcp 00:22:04.304 rmmod nvme_fabrics 00:22:04.304 rmmod nvme_keyring 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 1383394 ']' 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 1383394 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 1383394 ']' 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 1383394 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1383394 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:04.304 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:04.305 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1383394' 00:22:04.305 killing process with pid 1383394 00:22:04.305 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 1383394 00:22:04.305 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 1383394 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:04.565 20:35:56 
nvmf_tcp.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:04.565 20:35:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:06.478 20:35:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:06.478 20:35:58 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.bGEqRH2nDw /tmp/tmp.XH7o1SiKFF /tmp/tmp.Ngu1U6RrRP 00:22:06.478 00:22:06.478 real 1m23.198s 00:22:06.478 user 2m6.703s 00:22:06.478 sys 0m27.109s 00:22:06.478 20:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:06.478 20:35:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:06.478 ************************************ 00:22:06.478 END TEST nvmf_tls 00:22:06.478 ************************************ 00:22:06.478 20:35:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:06.478 20:35:58 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.478 20:35:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:06.478 20:35:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:06.478 20:35:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:06.478 ************************************ 00:22:06.478 START TEST nvmf_fips 00:22:06.478 ************************************ 00:22:06.478 20:35:58 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:06.740 * Looking for test storage... 
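Before the fips output continues, the nvmf_tls teardown that just scrolled past is easier to read condensed. This is a sketch assembled from the traced commands above, not the literal nvmf/common.sh source; the pids, interface name, and key paths are the ones from this particular run:

  kill 1383571                    # bdevperf, via killprocess
  kill 1383394                    # nvmf target, via killprocess
  modprobe -v -r nvme-tcp         # the trace shows nvme_tcp, nvme_fabrics
                                  # and nvme_keyring all unloading here
  ip -4 addr flush cvl_0_1        # drop the initiator-side address
  rm -f /tmp/tmp.bGEqRH2nDw /tmp/tmp.XH7o1SiKFF /tmp/tmp.Ngu1U6RrRP  # PSK files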
00:22:06.740 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.740 20:35:58 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:22:06.740 20:35:58 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:06.740 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:22:06.741 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:22:07.002 Error setting digest 00:22:07.002 00C20268DF7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:22:07.002 00C20268DF7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:22:07.002 20:35:59 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:15.139 
20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:15.139 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:15.139 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:15.139 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:15.140 Found net devices under 0000:31:00.0: cvl_0_0 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:15.140 Found net devices under 0000:31:00.1: cvl_0_1 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:15.140 20:36:06 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:15.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:15.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.658 ms 00:22:15.140 00:22:15.140 --- 10.0.0.2 ping statistics --- 00:22:15.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.140 rtt min/avg/max/mdev = 0.658/0.658/0.658/0.000 ms 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:15.140 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:15.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:22:15.140 00:22:15.140 --- 10.0.0.1 ping statistics --- 00:22:15.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:15.140 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=1388844 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 1388844 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1388844 ']' 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:15.140 20:36:07 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:15.140 [2024-07-15 20:36:07.440995] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:22:15.140 [2024-07-15 20:36:07.441067] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.140 EAL: No free 2048 kB hugepages reported on node 1 00:22:15.400 [2024-07-15 20:36:07.539392] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.400 [2024-07-15 20:36:07.628538] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.400 [2024-07-15 20:36:07.628597] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:15.401 [2024-07-15 20:36:07.628607] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.401 [2024-07-15 20:36:07.628614] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.401 [2024-07-15 20:36:07.628620] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:15.401 [2024-07-15 20:36:07.628650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:15.971 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:16.230 [2024-07-15 20:36:08.412500] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:16.230 [2024-07-15 20:36:08.428489] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.230 [2024-07-15 20:36:08.428781] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.230 [2024-07-15 20:36:08.458685] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:16.230 malloc0 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=1389181 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 1389181 /var/tmp/bdevperf.sock 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 1389181 ']' 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- 
# local max_retries=100 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:16.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.230 20:36:08 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:16.230 [2024-07-15 20:36:08.562904] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:22:16.230 [2024-07-15 20:36:08.562985] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1389181 ] 00:22:16.230 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.490 [2024-07-15 20:36:08.628335] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.490 [2024-07-15 20:36:08.693600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:17.059 20:36:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.059 20:36:09 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:22:17.059 20:36:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:17.318 [2024-07-15 20:36:09.461842] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.318 [2024-07-15 20:36:09.461913] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:17.318 TLSTESTn1 00:22:17.318 20:36:09 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:17.318 Running I/O for 10 seconds... 
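While the 10-second run is in flight, note the two RPC calls that set it up, both visible verbatim in the trace above (only the long workspace prefix on the script paths is trimmed here):

  # Attach a TLS-wrapped controller using the PSK written to key.txt earlier,
  # then tell the waiting bdevperf instance to start its configured workload.
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
      -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
      --psk test/nvmf/fips/key.txt
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests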
00:22:27.449 00:22:27.449 Latency(us) 00:22:27.449 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.449 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:27.449 Verification LBA range: start 0x0 length 0x2000 00:22:27.449 TLSTESTn1 : 10.02 4261.09 16.64 0.00 0.00 29992.16 5515.95 82138.45 00:22:27.449 =================================================================================================================== 00:22:27.449 Total : 4261.09 16.64 0.00 0.00 29992.16 5515.95 82138.45 00:22:27.449 0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:27.449 nvmf_trace.0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1389181 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1389181 ']' 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1389181 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.449 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1389181 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1389181' 00:22:27.709 killing process with pid 1389181 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1389181 00:22:27.709 Received shutdown signal, test time was about 10.000000 seconds 00:22:27.709 00:22:27.709 Latency(us) 00:22:27.709 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:27.709 =================================================================================================================== 00:22:27.709 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:27.709 [2024-07-15 20:36:19.855545] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1389181 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.709 20:36:19 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.709 rmmod nvme_tcp 00:22:27.709 rmmod nvme_fabrics 00:22:27.709 rmmod nvme_keyring 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 1388844 ']' 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 1388844 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 1388844 ']' 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 1388844 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1388844 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1388844' 00:22:27.709 killing process with pid 1388844 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 1388844 00:22:27.709 [2024-07-15 20:36:20.086902] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:27.709 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 1388844 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:27.970 20:36:20 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:30.514 20:36:22 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:30.514 20:36:22 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:22:30.514 00:22:30.514 real 0m23.448s 00:22:30.514 user 0m23.900s 00:22:30.514 sys 0m10.259s 00:22:30.514 20:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:30.514 20:36:22 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:30.514 ************************************ 00:22:30.514 END TEST nvmf_fips 
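The killprocess helper has now been traced twice in this section (pids 1389181 and 1388844), so its shape is clear. The following is a reconstruction from the xtrace lines, not the authoritative autotest_common.sh source:

  # Verify the pid is alive, refuse to kill sudo by name, then kill and reap.
  killprocess() {
      local pid=$1 process_name=
      kill -0 "$pid" || return 1                     # still running?
      [ "$(uname)" = Linux ] && process_name=$(ps --no-headers -o comm= "$pid")
      [ "$process_name" = sudo ] && return 1         # never kill sudo itself
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid"                                    # reap before returning
  }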
00:22:30.514 ************************************ 00:22:30.514 20:36:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:30.514 20:36:22 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 0 -eq 1 ']' 00:22:30.514 20:36:22 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:22:30.514 20:36:22 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:22:30.514 20:36:22 nvmf_tcp -- nvmf/nvmf.sh@73 -- # gather_supported_nvmf_pci_devs 00:22:30.514 20:36:22 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:22:30.514 20:36:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.649 20:36:30 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:38.650 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:38.650 20:36:30 nvmf_tcp -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:38.650 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:38.650 Found net devices under 0000:31:00.0: cvl_0_0 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:38.650 Found net devices under 0000:31:00.1: cvl_0_1 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:22:38.650 20:36:30 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:38.650 20:36:30 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:38.650 20:36:30 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:22:38.650 20:36:30 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:38.650 ************************************ 00:22:38.650 START TEST nvmf_perf_adq 00:22:38.650 ************************************ 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:38.650 * Looking for test storage... 00:22:38.650 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:38.650 20:36:30 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:46.787 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:46.787 Found 0000:31:00.1 (0x8086 - 0x159b) 
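The scan traced above classifies NICs by PCI vendor:device ID (0x8086:0x159b is an Intel E810 function, bound here to the ice driver) and then resolves each function to its kernel net device through sysfs. A minimal standalone sketch of that resolution step, using the PCI address that appears in this log (any other address is an assumption about your host):

    # Resolve a PCI function to its net device name via sysfs, as done above.
    # 0000:31:00.0 is taken from this log; on another host, substitute the
    # output of, e.g., `lspci -D -d 8086:159b`.
    pci=0000:31:00.0
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    for dev in "${pci_net_devs[@]}"; do
        [ -e "$dev" ] || continue     # glob did not match: no bound net device
        echo "Found net device under $pci: ${dev##*/}"
    done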
00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:46.787 Found net devices under 0000:31:00.0: cvl_0_0 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:46.787 Found net devices under 0000:31:00.1: cvl_0_1 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:22:46.787 20:36:38 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:22:47.728 20:36:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:22:49.639 20:36:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:22:54.922 20:36:46 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:22:54.922 Found 0000:31:00.0 (0x8086 - 0x159b) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:22:54.922 Found 0000:31:00.1 (0x8086 - 0x159b) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:22:54.922 Found net devices under 0000:31:00.0: cvl_0_0 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:22:54.922 Found net devices under 0000:31:00.1: cvl_0_1 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:54.922 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:54.923 20:36:46 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:54.923 20:36:47 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:54.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:54.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.563 ms 00:22:54.923 00:22:54.923 --- 10.0.0.2 ping statistics --- 00:22:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.923 rtt min/avg/max/mdev = 0.563/0.563/0.563/0.000 ms 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:54.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:54.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:22:54.923 00:22:54.923 --- 10.0.0.1 ping statistics --- 00:22:54.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:54.923 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1402460 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1402460 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1402460 ']' 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:54.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:54.923 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:54.923 [2024-07-15 20:36:47.114175] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
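The nvmf_tcp_init sequence above turns one dual-port host into a target/initiator pair: port cvl_0_0 is moved into a private network namespace for the SPDK target, both sides are addressed on 10.0.0.0/24, TCP port 4420 is opened, and the two pings confirm reachability in each direction. A condensed sketch of that topology, assuming the interface names and addresses from this log:

    # Single-host target/initiator split, condensed from the trace above.
    ip netns add cvl_0_0_ns_spdk                # namespace for the SPDK target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk   # move the target-side port in
    ip addr add 10.0.0.1/24 dev cvl_0_1         # initiator side (default ns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
    ping -c 1 10.0.0.2                          # initiator -> target check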
00:22:54.923 [2024-07-15 20:36:47.114224] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:54.923 EAL: No free 2048 kB hugepages reported on node 1 00:22:54.923 [2024-07-15 20:36:47.186374] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:54.923 [2024-07-15 20:36:47.252781] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:54.923 [2024-07-15 20:36:47.252822] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:54.923 [2024-07-15 20:36:47.252830] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:54.923 [2024-07-15 20:36:47.252837] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:54.923 [2024-07-15 20:36:47.252842] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:54.923 [2024-07-15 20:36:47.252975] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:54.923 [2024-07-15 20:36:47.253099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:54.923 [2024-07-15 20:36:47.253281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:54.923 [2024-07-15 20:36:47.253281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:22:55.861 20:36:47 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 [2024-07-15 20:36:48.085273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 Malloc1 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:55.861 [2024-07-15 20:36:48.144695] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=1402733 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:22:55.861 20:36:48 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:22:55.861 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:22:58.402 
"tick_rate": 2400000000, 00:22:58.402 "poll_groups": [ 00:22:58.402 { 00:22:58.402 "name": "nvmf_tgt_poll_group_000", 00:22:58.402 "admin_qpairs": 1, 00:22:58.402 "io_qpairs": 1, 00:22:58.402 "current_admin_qpairs": 1, 00:22:58.402 "current_io_qpairs": 1, 00:22:58.402 "pending_bdev_io": 0, 00:22:58.402 "completed_nvme_io": 21129, 00:22:58.402 "transports": [ 00:22:58.402 { 00:22:58.402 "trtype": "TCP" 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "name": "nvmf_tgt_poll_group_001", 00:22:58.402 "admin_qpairs": 0, 00:22:58.402 "io_qpairs": 1, 00:22:58.402 "current_admin_qpairs": 0, 00:22:58.402 "current_io_qpairs": 1, 00:22:58.402 "pending_bdev_io": 0, 00:22:58.402 "completed_nvme_io": 29613, 00:22:58.402 "transports": [ 00:22:58.402 { 00:22:58.402 "trtype": "TCP" 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "name": "nvmf_tgt_poll_group_002", 00:22:58.402 "admin_qpairs": 0, 00:22:58.402 "io_qpairs": 1, 00:22:58.402 "current_admin_qpairs": 0, 00:22:58.402 "current_io_qpairs": 1, 00:22:58.402 "pending_bdev_io": 0, 00:22:58.402 "completed_nvme_io": 20634, 00:22:58.402 "transports": [ 00:22:58.402 { 00:22:58.402 "trtype": "TCP" 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }, 00:22:58.402 { 00:22:58.402 "name": "nvmf_tgt_poll_group_003", 00:22:58.402 "admin_qpairs": 0, 00:22:58.402 "io_qpairs": 1, 00:22:58.402 "current_admin_qpairs": 0, 00:22:58.402 "current_io_qpairs": 1, 00:22:58.402 "pending_bdev_io": 0, 00:22:58.402 "completed_nvme_io": 21116, 00:22:58.402 "transports": [ 00:22:58.402 { 00:22:58.402 "trtype": "TCP" 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 } 00:22:58.402 ] 00:22:58.402 }' 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:22:58.402 20:36:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 1402733 00:23:06.535 Initializing NVMe Controllers 00:23:06.535 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:06.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:06.535 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:06.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:06.536 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:06.536 Initialization complete. Launching workers. 
00:23:06.536 ======================================================== 00:23:06.536 Latency(us) 00:23:06.536 Device Information : IOPS MiB/s Average min max 00:23:06.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 11572.60 45.21 5531.57 1870.90 8790.98 00:23:06.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 15251.40 59.58 4196.41 1304.95 9832.94 00:23:06.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13934.80 54.43 4593.07 1091.29 10943.95 00:23:06.536 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 14410.00 56.29 4441.87 1217.46 11455.99 00:23:06.536 ======================================================== 00:23:06.536 Total : 55168.79 215.50 4640.79 1091.29 11455.99 00:23:06.536 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:06.536 rmmod nvme_tcp 00:23:06.536 rmmod nvme_fabrics 00:23:06.536 rmmod nvme_keyring 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1402460 ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1402460 ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1402460' 00:23:06.536 killing process with pid 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1402460 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:06.536 20:36:58 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:08.447 20:37:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:08.447 20:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:23:08.447 20:37:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:23:09.827 20:37:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:23:11.737 20:37:03 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:17.015 20:37:08 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:17.015 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:17.015 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:17.015 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:17.016 Found net devices under 0000:31:00.0: cvl_0_0 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:17.016 Found net devices under 0000:31:00.1: cvl_0_1 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:17.016 
20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:17.016 20:37:08 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:17.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:17.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.720 ms 00:23:17.016 00:23:17.016 --- 10.0.0.2 ping statistics --- 00:23:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.016 rtt min/avg/max/mdev = 0.720/0.720/0.720/0.000 ms 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:17.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:17.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:23:17.016 00:23:17.016 --- 10.0.0.1 ping statistics --- 00:23:17.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:17.016 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:17.016 net.core.busy_poll = 1 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:17.016 net.core.busy_read = 1 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:23:17.016 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=1407278 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 1407278 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 1407278 ']' 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:17.277 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.277 20:37:09 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:17.277 [2024-07-15 20:37:09.509108] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:23:17.277 [2024-07-15 20:37:09.509159] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:17.277 EAL: No free 2048 kB hugepages reported on node 1 00:23:17.277 [2024-07-15 20:37:09.603108] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:17.536 [2024-07-15 20:37:09.673053] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:17.536 [2024-07-15 20:37:09.673092] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:17.536 [2024-07-15 20:37:09.673099] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:17.536 [2024-07-15 20:37:09.673104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:17.536 [2024-07-15 20:37:09.673110] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
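The second pass enables ADQ on the target-side port before restarting the target: hardware TC offload and busy polling are switched on, an mqprio qdisc splits the queues into two traffic classes, and a flower filter steers NVMe/TCP traffic (dst 10.0.0.2:4420) into the hardware-offloaded class. A minimal sketch of that sequence, using the device, queue layout (2@0 2@2), and addresses from this log; in the trace the ethtool and tc commands run inside the target namespace via `ip netns exec cvl_0_0_ns_spdk`, a prefix omitted below for brevity:

    # ADQ setup mirrored from the trace above (net.core sysctls are set in the
    # default namespace, as in the log).
    ethtool --offload cvl_0_0 hw-tc-offload on
    ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
    sysctl -w net.core.busy_poll=1
    sysctl -w net.core.busy_read=1
    tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
    tc qdisc add dev cvl_0_0 ingress
    tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower \
        dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The target is then configured with placement-aware sockets (sock_impl_set_options --enable-placement-id 1) and a matching transport priority (nvmf_create_transport -t tcp ... --sock-priority 1), as the RPC calls below show, so connections landing in the filtered class are served from the offloaded queues.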
00:23:17.536 [2024-07-15 20:37:09.673263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:23:17.536 [2024-07-15 20:37:09.673383] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:23:17.536 [2024-07-15 20:37:09.673532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:23:17.536 [2024-07-15 20:37:09.673534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.107 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 [2024-07-15 20:37:10.520296] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 Malloc1
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:18.367 [2024-07-15 20:37:10.579826] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=1407433
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2
00:23:18.367 20:37:10 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:23:18.367 EAL: No free 2048 kB hugepages reported on node 1
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{
00:23:20.416 "tick_rate": 2400000000,
00:23:20.416 "poll_groups": [
00:23:20.416 {
00:23:20.416 "name": "nvmf_tgt_poll_group_000",
00:23:20.416 "admin_qpairs": 1,
00:23:20.416 "io_qpairs": 2,
00:23:20.416 "current_admin_qpairs": 1,
00:23:20.416 "current_io_qpairs": 2,
00:23:20.416 "pending_bdev_io": 0,
00:23:20.416 "completed_nvme_io": 29247,
00:23:20.416 "transports": [
00:23:20.416 {
00:23:20.416 "trtype": "TCP"
00:23:20.416 }
00:23:20.416 ]
00:23:20.416 },
00:23:20.416 {
00:23:20.416 "name": "nvmf_tgt_poll_group_001",
00:23:20.416 "admin_qpairs": 0,
00:23:20.416 "io_qpairs": 2,
00:23:20.416 "current_admin_qpairs": 0,
00:23:20.416 "current_io_qpairs": 2,
00:23:20.416 "pending_bdev_io": 0,
00:23:20.416 "completed_nvme_io": 40989,
00:23:20.416 "transports": [
00:23:20.416 {
00:23:20.416 "trtype": "TCP"
00:23:20.416 }
00:23:20.416 ]
00:23:20.416 },
00:23:20.416 {
00:23:20.416 "name": "nvmf_tgt_poll_group_002",
00:23:20.416 "admin_qpairs": 0,
00:23:20.416 "io_qpairs": 0,
00:23:20.416 "current_admin_qpairs": 0,
00:23:20.416 "current_io_qpairs": 0,
00:23:20.416 "pending_bdev_io": 0,
00:23:20.416 "completed_nvme_io": 0,
00:23:20.416 "transports": [
00:23:20.416 {
00:23:20.416 "trtype": "TCP"
00:23:20.416 }
00:23:20.416 ]
00:23:20.416 },
00:23:20.416 {
00:23:20.416 "name": "nvmf_tgt_poll_group_003",
00:23:20.416 "admin_qpairs": 0,
00:23:20.416 "io_qpairs": 0,
00:23:20.416 "current_admin_qpairs": 0,
00:23:20.416 "current_io_qpairs": 0,
00:23:20.416 "pending_bdev_io": 0,
00:23:20.416 "completed_nvme_io": 0,
00:23:20.416 "transports": [
00:23:20.416 {
00:23:20.416 "trtype": "TCP"
00:23:20.416 }
00:23:20.416 ]
00:23:20.416 }
00:23:20.416 ]
00:23:20.416 }'
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length'
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]]
00:23:20.416 20:37:12 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 1407433
00:23:28.541 Initializing NVMe Controllers
00:23:28.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:23:28.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4
00:23:28.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5
00:23:28.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6
00:23:28.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7
00:23:28.541 Initialization complete. Launching workers.
00:23:28.541 ========================================================
00:23:28.541 Latency(us)
00:23:28.541 Device Information : IOPS MiB/s Average min max
00:23:28.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7379.50 28.83 8673.92 1197.13 52641.56
00:23:28.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10540.20 41.17 6072.50 1209.88 49044.34
00:23:28.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10604.40 41.42 6046.53 1275.47 49397.12
00:23:28.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 12632.19 49.34 5082.26 902.98 50324.00
00:23:28.541 ========================================================
00:23:28.541 Total : 41156.28 160.77 6228.32 902.98 52641.56
00:23:28.541
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20}
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:23:28.541 rmmod nvme_tcp
00:23:28.541 rmmod nvme_fabrics
00:23:28.541 rmmod nvme_keyring
00:23:28.801 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:23:28.801 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e
00:23:28.801 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0
00:23:28.801 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 1407278 ']'
00:23:28.801 20:37:20 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 1407278
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 1407278 ']'
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 1407278
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1407278
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1407278'
00:23:28.541 killing process with pid 1407278
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 1407278
00:23:28.541 20:37:20 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 1407278
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:23:28.801 20:37:21 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:23:31.336 20:37:23 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:23:31.336 20:37:23 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:23:31.336
00:23:31.336 real 0m52.901s
00:23:31.336 user 2m49.457s
00:23:31.336 sys 0m11.387s
00:23:31.336 20:37:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable
00:23:31.336 20:37:23 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
00:23:31.336 ************************************
00:23:31.336 END TEST nvmf_perf_adq
00:23:31.336 ************************************
00:23:31.336 20:37:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:23:31.336 20:37:23 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:31.336 20:37:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:23:31.336 20:37:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:23:31.336 20:37:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:23:31.336 ************************************
00:23:31.336 START TEST nvmf_shutdown
00:23:31.336 ************************************
00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp
00:23:31.336 * Looking for test storage...
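The adq_configure_nvmf_target sequence traced above condenses to a handful of RPCs. A minimal sketch, assuming an SPDK checkout with scripts/rpc.py available and nvmf_tgt already running; every method name and flag value below is taken verbatim from the trace, only the rpc.py path and variable names are illustrative:

# Reproduce the ADQ-oriented target setup from perf_adq.sh lines 42-49.
rpc=scripts/rpc.py                                        # assumed RPC client path
impl=$("$rpc" sock_get_default_impl | jq -r .impl_name)   # resolved to "posix" in this run
"$rpc" sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i "$impl"
"$rpc" framework_start_init
"$rpc" nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
"$rpc" bdev_malloc_create 64 512 -b Malloc1
"$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
"$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The pass/fail check after the spdk_nvme_perf run (perf_adq.sh@100-101) counts poll groups that stayed idle: with the load pinned to cores 4-7 (-c 0xF0) and ADQ steering working, at least two of the four target poll groups should have no I/O qpairs. A sketch of that check:

# Count idle poll groups; the test fails if fewer than 2 stayed idle.
idle=$("$rpc" nvmf_get_stats | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' | wc -l)
[[ $idle -lt 2 ]] && echo 'fewer than 2 idle poll groups; ADQ steering suspect' >&2

In the run above idle came out as 2 (poll groups 002 and 003 show current_io_qpairs 0), so [[ 2 -lt 2 ]] is false and the test proceeds to wait for the perf process.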
00:23:31.336 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.336 20:37:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:31.337 ************************************ 00:23:31.337 START TEST nvmf_shutdown_tc1 00:23:31.337 ************************************ 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:23:31.337 20:37:23 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:31.337 20:37:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:39.478 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:39.478 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:39.479 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:39.479 20:37:31 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:39.479 Found net devices under 0000:31:00.0: cvl_0_0 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:39.479 Found net devices under 0000:31:00.1: cvl_0_1 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:23:39.479 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:23:39.479 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.649 ms
00:23:39.479
00:23:39.479 --- 10.0.0.2 ping statistics ---
00:23:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:39.479 rtt min/avg/max/mdev = 0.649/0.649/0.649/0.000 ms
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:23:39.479 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:23:39.479 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms
00:23:39.479
00:23:39.479 --- 10.0.0.1 ping statistics ---
00:23:39.479 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:23:39.479 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=1414231
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 1414231
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1414231 ']'
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:23:39.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable
00:23:39.479 20:37:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:23:39.479 [2024-07-15 20:37:31.696922] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
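Stripped of the xtrace prefixes, the nvmftestinit plumbing recorded above builds a two-port loop on the E810 pair (cvl_0_0 for the target, cvl_0_1 for the initiator) and proves it with one ping in each direction. A condensed sketch, using the exact commands from the trace:

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target (0.649 ms above)
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator (0.241 ms above)

This is also why NVMF_APP is prefixed with the NVMF_TARGET_NS_CMD array: the target binary must run inside the namespace, giving the "ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0x1E" launch line seen in the trace.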
00:23:39.479 [2024-07-15 20:37:31.697013] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:39.479 EAL: No free 2048 kB hugepages reported on node 1 00:23:39.479 [2024-07-15 20:37:31.789251] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:39.740 [2024-07-15 20:37:31.885584] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:39.740 [2024-07-15 20:37:31.885640] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:39.740 [2024-07-15 20:37:31.885647] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:39.740 [2024-07-15 20:37:31.885652] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:39.740 [2024-07-15 20:37:31.885657] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:39.740 [2024-07-15 20:37:31.885792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:39.740 [2024-07-15 20:37:31.885960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:39.740 [2024-07-15 20:37:31.886129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:39.740 [2024-07-15 20:37:31.886131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 [2024-07-15 20:37:32.559009] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:40.311 20:37:32 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:40.311 20:37:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.311 Malloc1 00:23:40.311 [2024-07-15 20:37:32.662445] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.311 Malloc2 00:23:40.571 Malloc3 00:23:40.571 Malloc4 00:23:40.571 Malloc5 00:23:40.571 Malloc6 00:23:40.571 Malloc7 00:23:40.571 Malloc8 00:23:40.833 Malloc9 00:23:40.833 Malloc10 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=1414504 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 1414504 
/var/tmp/bdevperf.sock 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 1414504 ']' 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:40.833 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 
"name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 [2024-07-15 20:37:33.110353] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:23:40.833 [2024-07-15 20:37:33.110409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.833 { 00:23:40.833 "params": { 00:23:40.833 "name": "Nvme$subsystem", 00:23:40.833 "trtype": "$TEST_TRANSPORT", 00:23:40.833 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.833 "adrfam": "ipv4", 00:23:40.833 "trsvcid": "$NVMF_PORT", 00:23:40.833 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.833 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.833 "hdgst": ${hdgst:-false}, 00:23:40.833 "ddgst": ${ddgst:-false} 00:23:40.833 }, 00:23:40.833 "method": "bdev_nvme_attach_controller" 00:23:40.833 } 00:23:40.833 EOF 00:23:40.833 )") 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.833 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.834 { 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme$subsystem", 00:23:40.834 "trtype": "$TEST_TRANSPORT", 00:23:40.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "$NVMF_PORT", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.834 "hdgst": ${hdgst:-false}, 00:23:40.834 "ddgst": ${ddgst:-false} 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 } 00:23:40.834 EOF 00:23:40.834 )") 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:40.834 { 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme$subsystem", 00:23:40.834 "trtype": "$TEST_TRANSPORT", 00:23:40.834 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "$NVMF_PORT", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:40.834 "hdgst": ${hdgst:-false}, 
00:23:40.834 "ddgst": ${ddgst:-false} 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 } 00:23:40.834 EOF 00:23:40.834 )") 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:40.834 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:40.834 20:37:33 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme1", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme2", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme3", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme4", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme5", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme6", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme7", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 
00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme8", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme9", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 },{ 00:23:40.834 "params": { 00:23:40.834 "name": "Nvme10", 00:23:40.834 "trtype": "tcp", 00:23:40.834 "traddr": "10.0.0.2", 00:23:40.834 "adrfam": "ipv4", 00:23:40.834 "trsvcid": "4420", 00:23:40.834 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:40.834 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:40.834 "hdgst": false, 00:23:40.834 "ddgst": false 00:23:40.834 }, 00:23:40.834 "method": "bdev_nvme_attach_controller" 00:23:40.834 }' 00:23:40.834 [2024-07-15 20:37:33.178888] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.106 [2024-07-15 20:37:33.243854] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 1414504 00:23:42.487 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:23:42.488 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 1414504 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:42.488 20:37:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 1414231 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 
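Condensed, the tc1 scenario in the records just above is: start a throwaway bdev_svc client against the live target, SIGKILL it, verify the target survived, then drive a real workload. A skeleton under the PIDs of this run (perfpid=1414504 for the client, nvmfpid=1414231 for the target; shutdown.sh line numbers from the trace):

kill -9 "$perfpid"          # shutdown.sh@83: hard-kill the bdev_svc client (1414504)
rm -f /var/run/spdk_bdev1   # shutdown.sh@84: remove its leftover socket file
sleep 1                     # shutdown.sh@87
kill -0 "$nvmfpid"          # shutdown.sh@88: target (1414231) must still accept signals
# shutdown.sh@91: run verified I/O against the surviving target:
build/examples/bdevperf --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1

The "line 73: 1414504 Killed" shell job notice in the trace is the expected side effect of the kill -9, not an error in itself; the test only fails if the target process dies with the client.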
00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.428 { 00:23:43.428 "params": { 00:23:43.428 "name": "Nvme$subsystem", 00:23:43.428 "trtype": "$TEST_TRANSPORT", 00:23:43.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.428 "adrfam": "ipv4", 00:23:43.428 "trsvcid": "$NVMF_PORT", 00:23:43.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.428 "hdgst": ${hdgst:-false}, 00:23:43.428 "ddgst": ${ddgst:-false} 00:23:43.428 }, 00:23:43.428 "method": "bdev_nvme_attach_controller" 00:23:43.428 } 00:23:43.428 EOF 00:23:43.428 )") 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.428 { 00:23:43.428 "params": { 00:23:43.428 "name": "Nvme$subsystem", 00:23:43.428 "trtype": "$TEST_TRANSPORT", 00:23:43.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.428 "adrfam": "ipv4", 00:23:43.428 "trsvcid": "$NVMF_PORT", 00:23:43.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.428 "hdgst": ${hdgst:-false}, 00:23:43.428 "ddgst": ${ddgst:-false} 00:23:43.428 }, 00:23:43.428 "method": "bdev_nvme_attach_controller" 00:23:43.428 } 00:23:43.428 EOF 00:23:43.428 )") 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.428 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.428 { 00:23:43.428 "params": { 00:23:43.428 "name": "Nvme$subsystem", 00:23:43.428 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 [2024-07-15 20:37:35.625469] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:23:43.429 [2024-07-15 20:37:35.625519] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1415195 ] 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:43.429 { 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme$subsystem", 00:23:43.429 "trtype": "$TEST_TRANSPORT", 00:23:43.429 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "$NVMF_PORT", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.429 "hdgst": ${hdgst:-false}, 00:23:43.429 "ddgst": ${ddgst:-false} 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 } 00:23:43.429 EOF 00:23:43.429 )") 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:23:43.429 20:37:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme1", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme2", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme3", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme4", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme5", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme6", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:43.429 "hdgst": false, 00:23:43.429 "ddgst": false 00:23:43.429 }, 00:23:43.429 "method": "bdev_nvme_attach_controller" 00:23:43.429 },{ 00:23:43.429 "params": { 00:23:43.429 "name": "Nvme7", 00:23:43.429 "trtype": "tcp", 00:23:43.429 "traddr": "10.0.0.2", 00:23:43.429 "adrfam": "ipv4", 00:23:43.429 "trsvcid": "4420", 00:23:43.429 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:43.429 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:43.429 "hdgst": false, 00:23:43.430 "ddgst": false 00:23:43.430 }, 00:23:43.430 "method": "bdev_nvme_attach_controller" 00:23:43.430 },{ 00:23:43.430 "params": { 00:23:43.430 "name": "Nvme8", 00:23:43.430 "trtype": "tcp", 00:23:43.430 "traddr": "10.0.0.2", 00:23:43.430 "adrfam": "ipv4", 00:23:43.430 "trsvcid": "4420", 00:23:43.430 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:43.430 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:43.430 "hdgst": false, 
00:23:43.430 "ddgst": false 00:23:43.430 }, 00:23:43.430 "method": "bdev_nvme_attach_controller" 00:23:43.430 },{ 00:23:43.430 "params": { 00:23:43.430 "name": "Nvme9", 00:23:43.430 "trtype": "tcp", 00:23:43.430 "traddr": "10.0.0.2", 00:23:43.430 "adrfam": "ipv4", 00:23:43.430 "trsvcid": "4420", 00:23:43.430 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:43.430 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:43.430 "hdgst": false, 00:23:43.430 "ddgst": false 00:23:43.430 }, 00:23:43.430 "method": "bdev_nvme_attach_controller" 00:23:43.430 },{ 00:23:43.430 "params": { 00:23:43.430 "name": "Nvme10", 00:23:43.430 "trtype": "tcp", 00:23:43.430 "traddr": "10.0.0.2", 00:23:43.430 "adrfam": "ipv4", 00:23:43.430 "trsvcid": "4420", 00:23:43.430 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:43.430 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:43.430 "hdgst": false, 00:23:43.430 "ddgst": false 00:23:43.430 }, 00:23:43.430 "method": "bdev_nvme_attach_controller" 00:23:43.430 }' 00:23:43.430 EAL: No free 2048 kB hugepages reported on node 1 00:23:43.430 [2024-07-15 20:37:35.691949] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:43.430 [2024-07-15 20:37:35.756243] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.813 Running I/O for 1 seconds... 00:23:46.195 00:23:46.195 Latency(us) 00:23:46.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:46.195 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.195 Verification LBA range: start 0x0 length 0x400 00:23:46.195 Nvme1n1 : 1.12 228.01 14.25 0.00 0.00 277696.64 17913.17 234181.97 00:23:46.195 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.195 Verification LBA range: start 0x0 length 0x400 00:23:46.195 Nvme2n1 : 1.12 228.71 14.29 0.00 0.00 272309.97 23811.41 239424.85 00:23:46.196 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme3n1 : 1.18 270.32 16.90 0.00 0.00 226848.26 17913.17 237677.23 00:23:46.196 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme4n1 : 1.11 230.14 14.38 0.00 0.00 261181.44 20316.16 244667.73 00:23:46.196 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme5n1 : 1.13 225.78 14.11 0.00 0.00 261709.44 18240.85 253405.87 00:23:46.196 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme6n1 : 1.13 226.36 14.15 0.00 0.00 256162.35 17913.17 244667.73 00:23:46.196 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme7n1 : 1.19 269.89 16.87 0.00 0.00 211935.12 1433.60 242920.11 00:23:46.196 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme8n1 : 1.19 269.25 16.83 0.00 0.00 208840.36 18240.85 246415.36 00:23:46.196 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme9n1 : 1.21 265.09 16.57 0.00 0.00 208765.18 13161.81 248162.99 00:23:46.196 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:46.196 Verification LBA range: start 0x0 length 0x400 00:23:46.196 Nvme10n1 : 1.21 264.35 16.52 0.00 0.00 205698.99 13107.20 265639.25 00:23:46.196 =================================================================================================================== 00:23:46.196 Total : 2477.89 154.87 0.00 0.00 236131.62 1433.60 265639.25 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:46.196 rmmod nvme_tcp 00:23:46.196 rmmod nvme_fabrics 00:23:46.196 rmmod nvme_keyring 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 1414231 ']' 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 1414231 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 1414231 ']' 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 1414231 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1414231 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1414231' 00:23:46.196 killing process with pid 1414231 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 1414231 00:23:46.196 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 1414231 
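Annotation: a quick sanity check on the tc1 table above: 2477.89 total IOPS at the 64 KiB I/O size set by `-o 65536` is 2477.89 × 65536 / 2^20 ≈ 154.87 MiB/s, matching the MiB/s column. The `killing process with pid 1414231` sequence just traced follows the harness's kill helper; a sketch reconstructed from the trace (an approximation of the autotest_common.sh helper, not its verbatim source):

    killprocess() {
        local pid=$1
        kill -0 "$pid"                          # errors out if the pid is already gone
        if [[ $(uname) == Linux ]]; then
            process_name=$(ps --no-headers -o comm= "$pid")
        fi
        if [[ $process_name != sudo ]]; then    # never signal a sudo wrapper directly
            echo "killing process with pid $pid"
            kill "$pid"
            wait "$pid" || true                 # reap it; a nonzero exit is expected here
        fi
    }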
00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:46.456 20:37:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:48.997 00:23:48.997 real 0m17.456s 00:23:48.997 user 0m33.876s 00:23:48.997 sys 0m7.206s 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.997 ************************************ 00:23:48.997 END TEST nvmf_shutdown_tc1 00:23:48.997 ************************************ 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:48.997 ************************************ 00:23:48.997 START TEST nvmf_shutdown_tc2 00:23:48.997 ************************************ 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.997 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:48.998 
20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:48.998 20:37:40 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:48.998 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:48.998 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 
00:23:48.998 Found net devices under 0000:31:00.0: cvl_0_0 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:48.998 Found net devices under 0000:31:00.1: cvl_0_1 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.998 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:48.999 20:37:40 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:48.999 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:48.999 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:23:48.999 00:23:48.999 --- 10.0.0.2 ping statistics --- 00:23:48.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.999 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:48.999 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.999 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:23:48.999 00:23:48.999 --- 10.0.0.1 ping statistics --- 00:23:48.999 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.999 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1416301 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1416301 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1E 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1416301 ']' 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.999 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:48.999 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:48.999 [2024-07-15 20:37:41.332982] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:23:48.999 [2024-07-15 20:37:41.333041] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.999 EAL: No free 2048 kB hugepages reported on node 1 00:23:49.259 [2024-07-15 20:37:41.400442] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:49.259 [2024-07-15 20:37:41.455945] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:49.259 [2024-07-15 20:37:41.455976] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:49.259 [2024-07-15 20:37:41.455982] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:49.259 [2024-07-15 20:37:41.455988] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:49.259 [2024-07-15 20:37:41.455992] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
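Annotation: the nvmftestinit sequence traced above wires the two E810 ports into a point-to-point NVMe/TCP link by flushing stale addresses and moving the target-side port into its own network namespace; the steps, read directly off the trace:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side stays in the root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                    # root ns -> target (0.670 ms above)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target ns -> initiator (0.344 ms)

nvmf_tgt is then started inside that namespace with `-m 0x1E`, i.e. a core mask of binary 11110, which is why the four reactor notices below report cores 1 through 4.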
00:23:49.259 [2024-07-15 20:37:41.456097] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:49.259 [2024-07-15 20:37:41.456251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:49.259 [2024-07-15 20:37:41.456372] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:49.259 [2024-07-15 20:37:41.456551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.259 [2024-07-15 20:37:41.599004] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.259 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.260 20:37:41 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.260 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:49.519 20:37:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.519 Malloc1 00:23:49.519 [2024-07-15 20:37:41.697862] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:49.519 Malloc2 00:23:49.519 Malloc3 00:23:49.519 Malloc4 00:23:49.519 Malloc5 00:23:49.519 Malloc6 00:23:49.780 Malloc7 00:23:49.780 Malloc8 00:23:49.780 Malloc9 00:23:49.780 Malloc10 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=1416463 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 1416463 /var/tmp/bdevperf.sock 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1416463 ']' 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:49.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
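Annotation: the repeated `shutdown.sh@28 -- # cat` steps above append one batch of RPCs per subsystem to rpcs.txt, and the single `shutdown.sh@35 -- # rpc_cmd` then replays the whole file against the target, which is what produces the Malloc1 through Malloc10 bdevs and the "TCP Target Listening" notice. A hedged sketch of that loop (the malloc size variables and exact RPC arguments are inferred from the observed output, not copied from shutdown.sh; `<<-` again assumes tab indentation):

    for i in "${num_subsystems[@]}"; do
        cat >> "$testdir/rpcs.txt" <<-EOL
        bdev_malloc_create ${MALLOC_BDEV_SIZE:-64} ${MALLOC_BLOCK_SIZE:-512} -b Malloc$i
        nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
        nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
        EOL
    done
    rpc_cmd < "$testdir/rpcs.txt"    # one round trip creates all ten subsystems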
00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.780 "adrfam": "ipv4", 00:23:49.780 "trsvcid": "$NVMF_PORT", 00:23:49.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.780 "hdgst": ${hdgst:-false}, 00:23:49.780 "ddgst": ${ddgst:-false} 00:23:49.780 }, 00:23:49.780 "method": "bdev_nvme_attach_controller" 00:23:49.780 } 00:23:49.780 EOF 00:23:49.780 )") 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.780 "adrfam": "ipv4", 00:23:49.780 "trsvcid": "$NVMF_PORT", 00:23:49.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.780 "hdgst": ${hdgst:-false}, 00:23:49.780 "ddgst": ${ddgst:-false} 00:23:49.780 }, 00:23:49.780 "method": "bdev_nvme_attach_controller" 00:23:49.780 } 00:23:49.780 EOF 00:23:49.780 )") 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.780 "adrfam": "ipv4", 00:23:49.780 "trsvcid": "$NVMF_PORT", 00:23:49.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.780 "hdgst": ${hdgst:-false}, 00:23:49.780 "ddgst": ${ddgst:-false} 00:23:49.780 }, 00:23:49.780 "method": "bdev_nvme_attach_controller" 00:23:49.780 } 00:23:49.780 EOF 00:23:49.780 )") 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.780 "adrfam": "ipv4", 00:23:49.780 "trsvcid": "$NVMF_PORT", 00:23:49.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.780 "hdgst": ${hdgst:-false}, 00:23:49.780 "ddgst": ${ddgst:-false} 00:23:49.780 }, 00:23:49.780 "method": "bdev_nvme_attach_controller" 00:23:49.780 } 00:23:49.780 EOF 00:23:49.780 )") 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.780 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.780 "adrfam": "ipv4", 00:23:49.780 "trsvcid": "$NVMF_PORT", 00:23:49.780 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.780 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.780 "hdgst": ${hdgst:-false}, 00:23:49.780 "ddgst": ${ddgst:-false} 00:23:49.780 }, 00:23:49.780 "method": "bdev_nvme_attach_controller" 00:23:49.780 } 00:23:49.780 EOF 00:23:49.780 )") 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.780 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.780 { 00:23:49.780 "params": { 00:23:49.780 "name": "Nvme$subsystem", 00:23:49.780 "trtype": "$TEST_TRANSPORT", 00:23:49.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.781 "adrfam": "ipv4", 00:23:49.781 "trsvcid": "$NVMF_PORT", 00:23:49.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.781 "hdgst": ${hdgst:-false}, 00:23:49.781 "ddgst": ${ddgst:-false} 00:23:49.781 }, 00:23:49.781 "method": "bdev_nvme_attach_controller" 00:23:49.781 } 00:23:49.781 EOF 00:23:49.781 )") 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.781 { 00:23:49.781 "params": { 00:23:49.781 "name": "Nvme$subsystem", 00:23:49.781 "trtype": "$TEST_TRANSPORT", 00:23:49.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.781 "adrfam": "ipv4", 00:23:49.781 "trsvcid": "$NVMF_PORT", 00:23:49.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.781 "hdgst": ${hdgst:-false}, 00:23:49.781 "ddgst": ${ddgst:-false} 00:23:49.781 }, 00:23:49.781 "method": "bdev_nvme_attach_controller" 00:23:49.781 } 00:23:49.781 EOF 00:23:49.781 )") 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:49.781 { 00:23:49.781 "params": { 00:23:49.781 "name": "Nvme$subsystem", 00:23:49.781 "trtype": "$TEST_TRANSPORT", 00:23:49.781 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:49.781 "adrfam": "ipv4", 00:23:49.781 "trsvcid": "$NVMF_PORT", 00:23:49.781 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:49.781 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:49.781 "hdgst": ${hdgst:-false}, 00:23:49.781 "ddgst": ${ddgst:-false} 00:23:49.781 }, 00:23:49.781 "method": "bdev_nvme_attach_controller" 00:23:49.781 } 00:23:49.781 EOF 00:23:49.781 )") 00:23:49.781 [2024-07-15 20:37:42.154333] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:23:49.781 [2024-07-15 20:37:42.154398] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1416463 ] 00:23:49.781 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.040 { 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme$subsystem", 00:23:50.040 "trtype": "$TEST_TRANSPORT", 00:23:50.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "$NVMF_PORT", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.040 "hdgst": ${hdgst:-false}, 00:23:50.040 "ddgst": ${ddgst:-false} 00:23:50.040 }, 00:23:50.040 "method": "bdev_nvme_attach_controller" 00:23:50.040 } 00:23:50.040 EOF 00:23:50.040 )") 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:50.040 { 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme$subsystem", 00:23:50.040 "trtype": "$TEST_TRANSPORT", 00:23:50.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "$NVMF_PORT", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:50.040 "hdgst": ${hdgst:-false}, 00:23:50.040 "ddgst": ${ddgst:-false} 00:23:50.040 }, 00:23:50.040 "method": "bdev_nvme_attach_controller" 00:23:50.040 } 00:23:50.040 EOF 00:23:50.040 )") 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
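Annotation: for tc2 the generated config reaches bdevperf through process substitution rather than a temp file; the `/dev/fd/63` in the launch command above is the substituted descriptor. Equivalently:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -r /var/tmp/bdevperf.sock \
        --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
        -q 64 -o 65536 -w verify -t 10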
00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:23:50.040 20:37:42 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme1", 00:23:50.040 "trtype": "tcp", 00:23:50.040 "traddr": "10.0.0.2", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "4420", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:50.040 "hdgst": false, 00:23:50.040 "ddgst": false 00:23:50.040 }, 00:23:50.040 "method": "bdev_nvme_attach_controller" 00:23:50.040 },{ 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme2", 00:23:50.040 "trtype": "tcp", 00:23:50.040 "traddr": "10.0.0.2", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "4420", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:50.040 "hdgst": false, 00:23:50.040 "ddgst": false 00:23:50.040 }, 00:23:50.040 "method": "bdev_nvme_attach_controller" 00:23:50.040 },{ 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme3", 00:23:50.040 "trtype": "tcp", 00:23:50.040 "traddr": "10.0.0.2", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "4420", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:50.040 "hdgst": false, 00:23:50.040 "ddgst": false 00:23:50.040 }, 00:23:50.040 "method": "bdev_nvme_attach_controller" 00:23:50.040 },{ 00:23:50.040 "params": { 00:23:50.040 "name": "Nvme4", 00:23:50.040 "trtype": "tcp", 00:23:50.040 "traddr": "10.0.0.2", 00:23:50.040 "adrfam": "ipv4", 00:23:50.040 "trsvcid": "4420", 00:23:50.040 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:50.040 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme5", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme6", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme7", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme8", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:50.041 "hdgst": false, 
00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme9", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 },{ 00:23:50.041 "params": { 00:23:50.041 "name": "Nvme10", 00:23:50.041 "trtype": "tcp", 00:23:50.041 "traddr": "10.0.0.2", 00:23:50.041 "adrfam": "ipv4", 00:23:50.041 "trsvcid": "4420", 00:23:50.041 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:50.041 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:50.041 "hdgst": false, 00:23:50.041 "ddgst": false 00:23:50.041 }, 00:23:50.041 "method": "bdev_nvme_attach_controller" 00:23:50.041 }' 00:23:50.041 EAL: No free 2048 kB hugepages reported on node 1 00:23:50.041 [2024-07-15 20:37:42.222632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.041 [2024-07-15 20:37:42.288368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:51.422 Running I/O for 10 seconds... 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:23:51.422 20:37:43 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:23:51.422 20:37:43 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:51.682 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:51.942 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:23:51.942 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:23:51.942 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 1416463 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1416463 ']' 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1416463 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416463 00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:52.203 20:37:44 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416463'
00:23:52.203 killing process with pid 1416463
00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1416463
00:23:52.203 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1416463
00:23:52.203 Received shutdown signal, test time was about 0.963535 seconds
00:23:52.203
00:23:52.203 Latency(us)
00:23:52.203 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:52.203 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme1n1 : 0.95 202.92 12.68 0.00 0.00 311305.39 26432.85 262144.00
00:23:52.203 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme2n1 : 0.96 265.94 16.62 0.00 0.00 232434.40 4942.51 230686.72
00:23:52.203 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme3n1 : 0.95 270.86 16.93 0.00 0.00 223402.45 13598.72 251658.24
00:23:52.203 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme4n1 : 0.95 268.22 16.76 0.00 0.00 220747.95 19005.44 249910.61
00:23:52.203 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme5n1 : 0.93 205.43 12.84 0.00 0.00 281960.96 25340.59 253405.87
00:23:52.203 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme6n1 : 0.96 267.16 16.70 0.00 0.00 211929.39 21517.65 244667.73
00:23:52.203 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme7n1 : 0.95 268.49 16.78 0.00 0.00 206391.04 20206.93 249910.61
00:23:52.203 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme8n1 : 0.92 208.09 13.01 0.00 0.00 258708.48 18568.53 255153.49
00:23:52.203 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme9n1 : 0.93 211.83 13.24 0.00 0.00 246377.48 2211.84 225443.84
00:23:52.203 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:23:52.203 Verification LBA range: start 0x0 length 0x400
00:23:52.203 Nvme10n1 : 0.94 204.58 12.79 0.00 0.00 250758.26 18131.63 274377.39
00:23:52.203 ===================================================================================================================
00:23:52.203 Total : 2373.52 148.35 0.00 0.00 240782.56 2211.84 274377.39
00:23:52.463 20:37:44 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 1416301
00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:23:53.402 20:37:45
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:53.402 rmmod nvme_tcp 00:23:53.402 rmmod nvme_fabrics 00:23:53.402 rmmod nvme_keyring 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 1416301 ']' 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 1416301 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 1416301 ']' 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 1416301 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1416301 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1416301' 00:23:53.402 killing process with pid 1416301 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 1416301 00:23:53.402 20:37:45 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 1416301 00:23:53.660 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:53.660 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:53.660 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:53.660 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
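Both kills in this run (1416463, the bdevperf app, and 1416301, the tc2 target just above) go through the same killprocess helper whose xtrace appears in the surrounding lines. Reconstructed from the autotest_common.sh@948-972 trace, the idiom looks roughly like the sketch below; the body of the sudo branch is not visible in this log and is assumed here:

# Sketch of the killprocess idiom as traced above (autotest_common.sh@948-972).
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1            # @948: require a pid argument
    kill -0 "$pid" || return 1           # @952: signal 0 probes liveness only
    if [ "$(uname)" = Linux ]; then      # @953
        local process_name
        process_name=$(ps --no-headers -o comm= "$pid")   # @954
        # @958 compares against "sudo"; SPDK reactors report as reactor_0,
        # reactor_1, ... so the wrapper process is never signalled by mistake
        [ "$process_name" = sudo ] && return 1   # assumed handling of that case
    fi
    echo "killing process with pid $pid" # @966
    kill "$pid"                          # @967
    wait "$pid"                          # @972: reap the child, collect status
}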
00:23:53.661 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:53.661 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.661 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:53.661 20:37:46 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:56.202 00:23:56.202 real 0m7.153s 00:23:56.202 user 0m20.723s 00:23:56.202 sys 0m1.183s 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:56.202 ************************************ 00:23:56.202 END TEST nvmf_shutdown_tc2 00:23:56.202 ************************************ 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:56.202 ************************************ 00:23:56.202 START TEST nvmf_shutdown_tc3 00:23:56.202 ************************************ 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
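The nvmftestinit sequence that follows first builds per-vendor device tables (e810, x722, mlx, keyed by PCI vendor:device ids) and then walks the bus for supported NICs; the 'Found 0000:31:00.0 (0x8086 - 0x159b)' and 'Found net devices under 0000:31:00.0: cvl_0_0' lines further down are the output of that walk. A rough sketch of the same sysfs scan, matching only the Intel E810 id (0x8086:0x159b) this rig actually reports; it is a stand-in for the helper, not the SPDK code itself:

# Minimal sysfs scan: list net interfaces bound to each E810 PCI function.
for pci in /sys/bus/pci/devices/*; do
    vendor=$(<"$pci/vendor")
    device=$(<"$pci/device")
    [[ $vendor == 0x8086 && $device == 0x159b ]] || continue
    echo "Found ${pci##*/} ($vendor - $device)"
    for net in "$pci"/net/*; do
        # the net/ subdirectory names the kernel interfaces on this port
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done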
00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:23:56.202 Found 0000:31:00.0 (0x8086 - 0x159b) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:23:56.202 Found 0000:31:00.1 (0x8086 - 0x159b) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:23:56.202 Found net devices under 0000:31:00.0: cvl_0_0 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.202 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.203 20:37:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:23:56.203 Found net devices under 0000:31:00.1: cvl_0_1 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:56.203 20:37:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.670 ms 00:23:56.203 00:23:56.203 --- 10.0.0.2 ping statistics --- 00:23:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.203 rtt min/avg/max/mdev = 0.670/0.670/0.670/0.000 ms 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.252 ms 00:23:56.203 00:23:56.203 --- 10.0.0.1 ping statistics --- 00:23:56.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.203 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=1417820 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 1417820 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1417820 ']' 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:56.203 20:37:48 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:56.203 20:37:48 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:56.463 [2024-07-15 20:37:48.593369] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:23:56.463 [2024-07-15 20:37:48.593434] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.463 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.463 [2024-07-15 20:37:48.689516] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:56.463 [2024-07-15 20:37:48.749688] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.463 [2024-07-15 20:37:48.749722] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.463 [2024-07-15 20:37:48.749728] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.463 [2024-07-15 20:37:48.749732] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.463 [2024-07-15 20:37:48.749736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:56.463 [2024-07-15 20:37:48.749849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:56.463 [2024-07-15 20:37:48.750007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:56.463 [2024-07-15 20:37:48.750162] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:56.463 [2024-07-15 20:37:48.750165] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.034 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.294 [2024-07-15 20:37:49.413463] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.294 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.294 Malloc1 00:23:57.294 [2024-07-15 20:37:49.512345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.294 Malloc2 00:23:57.294 Malloc3 00:23:57.294 Malloc4 00:23:57.294 Malloc5 00:23:57.554 Malloc6 00:23:57.554 Malloc7 00:23:57.554 Malloc8 00:23:57.554 Malloc9 00:23:57.554 Malloc10 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=1418208 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 1418208 /var/tmp/bdevperf.sock 00:23:57.554 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 1418208 ']' 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.555 { 00:23:57.555 "params": { 00:23:57.555 "name": "Nvme$subsystem", 00:23:57.555 "trtype": "$TEST_TRANSPORT", 00:23:57.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.555 "adrfam": "ipv4", 00:23:57.555 "trsvcid": "$NVMF_PORT", 00:23:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.555 "hdgst": ${hdgst:-false}, 00:23:57.555 "ddgst": ${ddgst:-false} 00:23:57.555 }, 00:23:57.555 "method": "bdev_nvme_attach_controller" 00:23:57.555 } 00:23:57.555 EOF 00:23:57.555 )") 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.555 { 00:23:57.555 "params": { 00:23:57.555 "name": "Nvme$subsystem", 00:23:57.555 "trtype": "$TEST_TRANSPORT", 00:23:57.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.555 "adrfam": "ipv4", 00:23:57.555 "trsvcid": "$NVMF_PORT", 00:23:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:57.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.555 "hdgst": ${hdgst:-false}, 00:23:57.555 "ddgst": ${ddgst:-false} 00:23:57.555 }, 00:23:57.555 "method": "bdev_nvme_attach_controller" 00:23:57.555 } 00:23:57.555 EOF 00:23:57.555 )") 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.555 { 00:23:57.555 "params": { 00:23:57.555 "name": "Nvme$subsystem", 00:23:57.555 "trtype": "$TEST_TRANSPORT", 00:23:57.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.555 "adrfam": "ipv4", 00:23:57.555 "trsvcid": "$NVMF_PORT", 00:23:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.555 "hdgst": ${hdgst:-false}, 00:23:57.555 "ddgst": ${ddgst:-false} 00:23:57.555 }, 00:23:57.555 "method": "bdev_nvme_attach_controller" 00:23:57.555 } 00:23:57.555 EOF 00:23:57.555 )") 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.555 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.555 { 00:23:57.555 "params": { 00:23:57.555 "name": "Nvme$subsystem", 00:23:57.555 "trtype": "$TEST_TRANSPORT", 00:23:57.555 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.555 "adrfam": "ipv4", 00:23:57.555 "trsvcid": "$NVMF_PORT", 00:23:57.555 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.555 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.555 "hdgst": ${hdgst:-false}, 00:23:57.555 "ddgst": ${ddgst:-false} 00:23:57.555 }, 00:23:57.555 "method": "bdev_nvme_attach_controller" 00:23:57.555 } 00:23:57.555 EOF 00:23:57.555 )") 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.815 { 00:23:57.815 "params": { 00:23:57.815 "name": "Nvme$subsystem", 00:23:57.815 "trtype": "$TEST_TRANSPORT", 00:23:57.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.815 "adrfam": "ipv4", 00:23:57.815 "trsvcid": "$NVMF_PORT", 00:23:57.815 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.815 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.815 "hdgst": ${hdgst:-false}, 00:23:57.815 "ddgst": ${ddgst:-false} 00:23:57.815 }, 00:23:57.815 "method": "bdev_nvme_attach_controller" 00:23:57.815 } 00:23:57.815 EOF 00:23:57.815 )") 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.815 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.815 { 00:23:57.815 "params": { 00:23:57.815 "name": "Nvme$subsystem", 00:23:57.815 "trtype": "$TEST_TRANSPORT", 00:23:57.815 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.815 "adrfam": "ipv4", 00:23:57.815 "trsvcid": "$NVMF_PORT", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.816 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:57.816 "hdgst": ${hdgst:-false}, 00:23:57.816 "ddgst": ${ddgst:-false} 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 } 00:23:57.816 EOF 00:23:57.816 )") 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.816 [2024-07-15 20:37:49.951871] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:23:57.816 [2024-07-15 20:37:49.951925] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1418208 ] 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.816 { 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme$subsystem", 00:23:57.816 "trtype": "$TEST_TRANSPORT", 00:23:57.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "$NVMF_PORT", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.816 "hdgst": ${hdgst:-false}, 00:23:57.816 "ddgst": ${ddgst:-false} 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 } 00:23:57.816 EOF 00:23:57.816 )") 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.816 { 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme$subsystem", 00:23:57.816 "trtype": "$TEST_TRANSPORT", 00:23:57.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "$NVMF_PORT", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.816 "hdgst": ${hdgst:-false}, 00:23:57.816 "ddgst": ${ddgst:-false} 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 } 00:23:57.816 EOF 00:23:57.816 )") 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.816 { 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme$subsystem", 00:23:57.816 "trtype": "$TEST_TRANSPORT", 00:23:57.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "$NVMF_PORT", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.816 "hdgst": ${hdgst:-false}, 00:23:57.816 "ddgst": ${ddgst:-false} 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 } 00:23:57.816 EOF 00:23:57.816 )") 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:23:57.816 20:37:49 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:23:57.816 { 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme$subsystem", 00:23:57.816 "trtype": "$TEST_TRANSPORT", 00:23:57.816 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "$NVMF_PORT", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:57.816 "hdgst": ${hdgst:-false}, 00:23:57.816 "ddgst": ${ddgst:-false} 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 } 00:23:57.816 EOF 00:23:57.816 )") 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:23:57.816 EAL: No free 2048 kB hugepages reported on node 1 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:23:57.816 20:37:49 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme1", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme2", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme3", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme4", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme5", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme6", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host6", 
00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme7", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme8", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme9", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 },{ 00:23:57.816 "params": { 00:23:57.816 "name": "Nvme10", 00:23:57.816 "trtype": "tcp", 00:23:57.816 "traddr": "10.0.0.2", 00:23:57.816 "adrfam": "ipv4", 00:23:57.816 "trsvcid": "4420", 00:23:57.816 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:57.816 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:57.816 "hdgst": false, 00:23:57.816 "ddgst": false 00:23:57.816 }, 00:23:57.816 "method": "bdev_nvme_attach_controller" 00:23:57.816 }' 00:23:57.816 [2024-07-15 20:37:50.019358] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.816 [2024-07-15 20:37:50.085952] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:59.727 Running I/O for 10 seconds... 
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 1417820 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 1417820 ']' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 1417820 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 
1417820
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1417820'
00:24:00.312 killing process with pid 1417820
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 1417820
00:24:00.312 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 1417820
00:24:00.312 [2024-07-15 20:37:52.552881] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b26f0 is same with the state(5) to be set
00:24:00.313 [2024-07-15 20:37:52.553205] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b26f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553209] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b26f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553978] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553984] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553988] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553993] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.553998] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554003] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554008] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554012] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554016] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554021] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554025] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554030] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554034] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554039] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554043] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554052] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554056] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is same with the state(5) to be set 00:24:00.313 [2024-07-15 20:37:52.554065] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b50f0 is 
00:24:00.314 [2024-07-15 20:37:52.555669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b3030 is same with the state(5) to be set
[last message repeated 62 times for tqpair=0x9b3030, 20:37:52.555691 through 20:37:52.555972]
00:24:00.314 [2024-07-15 20:37:52.556632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b34f0 is same with the state(5) to be set
00:24:00.314 [2024-07-15 20:37:52.556825] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b3990 is same with the state(5) to be set
[last message repeated 62 times for tqpair=0x9b3990, 20:37:52.556841 through 20:37:52.557112]
00:24:00.315 [2024-07-15 20:37:52.557734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b3e50 is same with the state(5) to be set
[last message repeated 62 times for tqpair=0x9b3e50, 20:37:52.557748 through 20:37:52.558037]
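The bursts above are all the same guard firing in the NVMe-oF target's PDU receive state machine: during the forced shutdown each qpair keeps being asked to enter a recv state it already holds, and the setter logs an error instead of applying the no-op transition. A minimal C sketch of such a guard, reconstructed from the log text alone; the names, the enum layout, and the surrounding logic are assumptions, not the verbatim tcp.c source:

    #include <stdio.h>

    /* Illustrative stand-ins for the real SPDK types (assumed). */
    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY = 0,
        /* intermediate states elided */
        RECV_STATE_ERROR = 5            /* presumably the "state(5)" in the log */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void
    set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* Each redundant call emits one log line; a teardown path that
             * re-requests the error state once per queued event would
             * produce the 63-entry bursts seen above. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
        /* ...drive the receive state machine for the new state... */
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };
        set_recv_state(&q, RECV_STATE_ERROR);   /* triggers the error path */
        return 0;
    }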
00:24:00.316 [2024-07-15 20:37:52.558258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:00.316 [2024-07-15 20:37:52.558302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid 1, 2, and 3]
00:24:00.316 [2024-07-15 20:37:52.558371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51bc0 is same with the state(5) to be set
[the same block, four aborted ASYNC EVENT REQUESTs (cid 0-3) followed by one nvme_tcp.c: 327 recv-state error, repeated for tqpair=0x1f74da0, 0x1f99e40, 0x1b1bfe0, 0x20fafd0, 0x1f2f5d0, and 0x1a32610, 20:37:52.558402 through 20:37:52.558912]
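In the abort notices above, "(00/08)" is the NVMe completion status printed as status-code-type/status-code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", the expected completion for commands still outstanding when their submission queue is deleted at shutdown. A small self-contained sketch of that decoding, assuming the standard NVMe status-field layout (the helper below is illustrative, not an SPDK API):

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe CQE status field: bit 0 is the phase tag, bits 8:1 the status
     * code (SC), bits 11:9 the status code type (SCT). */
    static void
    print_status(uint16_t status)
    {
        unsigned sc  = (status >> 1) & 0xff;    /* status code */
        unsigned sct = (status >> 9) & 0x7;     /* status code type */
        printf("(%02x/%02x)\n", sct, sc);
    }

    int main(void)
    {
        /* SCT 0x0 / SC 0x08 = generic "Command Aborted due to SQ Deletion";
         * prints "(00/08)" exactly as in the log. */
        print_status(0x08 << 1);
        return 0;
    }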
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.317 [2024-07-15 20:37:52.559451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:37:52.559456] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.317 he state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559462] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559471] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559487] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559491] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with t[2024-07-15 20:37:52.559490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(5) to be set 00:24:00.318 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559497] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559512] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559529] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559530] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559536] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559540] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559547] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559551] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559556] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559561] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559566] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559577] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559593] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:37:52.559602] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 he state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559611] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:1[2024-07-15 20:37:52.559615] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with t28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 he state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559623] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559628] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559633] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559638] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559643] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559649] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559653] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559658] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559668] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559672] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:37:52.559678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 he state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559685] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559689] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559695] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-07-15 20:37:52.559700] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with tdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 he state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559707] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559711] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with t[2024-07-15 20:37:52.559710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:1he state(5) to be set 00:24:00.318 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559722] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559734] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559739] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559744] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 
00:24:00.318 [2024-07-15 20:37:52.559747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.318 [2024-07-15 20:37:52.559749] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559755] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.318 [2024-07-15 20:37:52.559760] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.318 [2024-07-15 20:37:52.559765] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b42f0 is same with the state(5) to be set 00:24:00.319 [2024-07-15 20:37:52.559765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.319 [2024-07-15 20:37:52.559870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.319 [2024-07-15 20:37:52.559879] nvme_qpair.c: 
00:24:00.319 [2024-07-15 20:37:52.559879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.559990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.559997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560362] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b4790 is same with the state(5) to be set
00:24:00.319 [2024-07-15 20:37:52.560366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.319 [2024-07-15 20:37:52.560398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.319 [2024-07-15 20:37:52.560409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.560575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.560600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.320 [2024-07-15 20:37:52.560642] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x202d820 was disconnected and freed. reset controller.
00:24:00.320 [2024-07-15 20:37:52.573867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9b4790 is same with the state(5) to be set
00:24:00.320 [2024-07-15 20:37:52.581994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.582023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.320 [2024-07-15 20:37:52.582037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.320 [2024-07-15 20:37:52.582045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:00.321 [2024-07-15 20:37:52.582151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[2024-07-15 20:37:52.582167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 
20:37:52.582338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 20:37:52.582484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.321 [2024-07-15 20:37:52.582491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.321 [2024-07-15 
20:37:52.582500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.321 [2024-07-15 20:37:52.582507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 34 further WRITE commands, cid:30-63, lba 28416-32640 in steps of 128, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:00.322 [2024-07-15 20:37:52.583076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:00.322 [2024-07-15 20:37:52.583118] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20ba230 was disconnected and freed. reset controller.
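The completions above all carry status (00/08): status code type 0x0 (generic) with status code 0x08, which the NVMe base specification defines as Command Aborted due to SQ Deletion. A minimal sketch in plain C (a standalone illustration, not SPDK's own printer; the bit layout is taken from the NVMe spec) of how the cid/p/m/dnr fields printed in these log entries unpack from completion-queue-entry dword 3:

/* Decode the status portion of an NVMe CQE dword 3:
 * bits 15:0 CID, bit 16 phase tag, bits 24:17 status code (SC),
 * bits 27:25 status code type (SCT), bit 30 more, bit 31 do-not-retry. */
#include <stdint.h>
#include <stdio.h>

static void decode_cqe_dw3(uint32_t dw3)
{
    uint16_t cid = dw3 & 0xffff;
    uint8_t  p   = (dw3 >> 16) & 0x1;
    uint8_t  sc  = (dw3 >> 17) & 0xff;  /* status code      */
    uint8_t  sct = (dw3 >> 25) & 0x7;   /* status code type */
    uint8_t  m   = (dw3 >> 30) & 0x1;   /* more             */
    uint8_t  dnr = (dw3 >> 31) & 0x1;   /* do not retry     */

    printf("cid:%u (%02x/%02x) p:%u m:%u dnr:%u%s\n", cid, sct, sc, p, m, dnr,
           (sct == 0x0 && sc == 0x08) ? "  ABORTED - SQ DELETION" : "");
}

int main(void)
{
    /* SCT=0x0, SC=0x08, cid=29 -> matches the aborted WRITEs above */
    decode_cqe_dw3((0x08u << 17) | 29u);
    return 0;
}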
00:24:00.322 [2024-07-15 20:37:52.583215] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f51bc0 (9): Bad file descriptor
00:24:00.322 [2024-07-15 20:37:52.583243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74da0 (9): Bad file descriptor
[log condensed: 4 ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:0-3, each printed and completed with ABORTED - SQ DELETION (00/08)]
00:24:00.322 [2024-07-15 20:37:52.583333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f74b40 is same with the state(5) to be set
00:24:00.322 [2024-07-15 20:37:52.583349] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f99e40 (9): Bad file descriptor
00:24:00.322 [2024-07-15 20:37:52.583362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bfe0 (9): Bad file descriptor
00:24:00.322 [2024-07-15 20:37:52.583377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fafd0 (9): Bad file descriptor
00:24:00.322 [2024-07-15 20:37:52.583390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2f5d0 (9): Bad file descriptor
[log condensed: 4 ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:0-3, each printed and completed with ABORTED - SQ DELETION (00/08)]
00:24:00.323 [2024-07-15 20:37:52.583475] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86400 is same with the state(5) to be set
00:24:00.323 [2024-07-15 20:37:52.583487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a32610 (9): Bad file descriptor
[log condensed: 4 ASYNC EVENT REQUEST (0c) admin commands, qid:0 cid:0-3, each printed and completed with ABORTED - SQ DELETION (00/08)]
00:24:00.323 [2024-07-15 20:37:52.583569] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa4120 is same with the state(5) to be set
00:24:00.323 [2024-07-15 20:37:52.584862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.323 [2024-07-15 20:37:52.584878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: 63 further WRITE commands, cid:1-63, lba 24704-32640 in steps of 128, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:00.324 [2024-07-15 20:37:52.585947] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20301d0 is same with the state(5) to be set
00:24:00.324 [2024-07-15 20:37:52.585985] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20301d0 was disconnected and freed. reset controller.
00:24:00.324 [2024-07-15 20:37:52.587362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:00.324 [2024-07-15 20:37:52.588856] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:00.324 [2024-07-15 20:37:52.588880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:00.324 [2024-07-15 20:37:52.588901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86400 (9): Bad file descriptor
00:24:00.324 [2024-07-15 20:37:52.589294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.324 [2024-07-15 20:37:52.589319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fafd0 with addr=10.0.0.2, port=4420
00:24:00.324 [2024-07-15 20:37:52.589328] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fafd0 is same with the state(5) to be set
00:24:00.324 [2024-07-15 20:37:52.591249] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:00.324 [2024-07-15 20:37:52.591615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.324 [2024-07-15 20:37:52.591630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f51bc0 with addr=10.0.0.2, port=4420
00:24:00.324 [2024-07-15 20:37:52.591638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51bc0 is same with the state(5) to be set
00:24:00.324 [2024-07-15 20:37:52.591664] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fafd0 (9): Bad file descriptor
00:24:00.324 [2024-07-15 20:37:52.591704] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:00.324 [2024-07-15 20:37:52.591753] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
[log condensed: 2 READ commands, cid:62-63, lba 24320-24448, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1]
00:24:00.324 [2024-07-15 20:37:52.592079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2a1f0 is same with the state(5) to be set
00:24:00.324 [2024-07-15 20:37:52.592124] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2a1f0 was disconnected and freed. reset controller.
[log condensed: 4 READ commands, cid:60-63, lba 24064-24448, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1]
00:24:00.325 [2024-07-15 20:37:52.592270] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b7af0 is same with the state(5) to be set
00:24:00.325 [2024-07-15 20:37:52.592311] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x20b7af0 was disconnected and freed. reset controller.
00:24:00.325 [2024-07-15 20:37:52.592350] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:24:00.325 [2024-07-15 20:37:52.592576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.325 [2024-07-15 20:37:52.592590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86400 with addr=10.0.0.2, port=4420
00:24:00.325 [2024-07-15 20:37:52.592597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86400 is same with the state(5) to be set
00:24:00.325 [2024-07-15 20:37:52.592607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f51bc0 (9): Bad file descriptor
00:24:00.325 [2024-07-15 20:37:52.592617] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:24:00.325 [2024-07-15 20:37:52.592624] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:24:00.325 [2024-07-15 20:37:52.592633] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:24:00.325 [2024-07-15 20:37:52.594534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
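errno 111 in the repeated connect() failures above is Linux ECONNREFUSED: the NVMe-oF listener on 10.0.0.2:4420 is gone during the reset, so the kernel answers the TCP SYN with RST and the reconnect poll fails. A minimal POSIX sketch (a hypothetical standalone program, pointed at 127.0.0.1 so it is safe to run anywhere nothing listens on 4420) reproducing the same errno:

/* Connect to a port with no listener and print the resulting errno. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return 1;

    struct sockaddr_in addr = { .sin_family = AF_INET,
                                .sin_port   = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr); /* no listener here */

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    close(fd);
    return 0;
}

On Linux this prints "connect() failed, errno = 111 (Connection refused)", matching the posix_sock_create errors in the log.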
00:24:00.325 [2024-07-15 20:37:52.594549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:24:00.325 [2024-07-15 20:37:52.594561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:24:00.325 [2024-07-15 20:37:52.594572] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74b40 (9): Bad file descriptor
00:24:00.325 [2024-07-15 20:37:52.594590] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86400 (9): Bad file descriptor
00:24:00.325 [2024-07-15 20:37:52.594599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:24:00.325 [2024-07-15 20:37:52.594605] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:24:00.325 [2024-07-15 20:37:52.594613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:24:00.325 [2024-07-15 20:37:52.594656] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa4120 (9): Bad file descriptor
[log condensed: 4 WRITE commands, cid:60-63, lba 24064-24448, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1]
[log condensed: 60 READ commands, cid:0-59, lba 16384-23936 in steps of 128, each printed and completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0]
00:24:00.326 [2024-07-15 20:37:52.596916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2b600 is same with the state(5) to be set
00:24:00.326 [2024-07-15 20:37:52.596961] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2b600 was disconnected and freed. reset controller.
00:24:00.326 [2024-07-15 20:37:52.596969] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:24:00.327 [2024-07-15 20:37:52.597001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:00.327 [2024-07-15 20:37:52.597375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:00.327 [2024-07-15 20:37:52.597390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f74da0 with addr=10.0.0.2, port=4420
00:24:00.327 [2024-07-15 20:37:52.597399] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f74da0 is same with the state(5) to be set
00:24:00.327 [2024-07-15 20:37:52.597416] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state
00:24:00.327 [2024-07-15 20:37:52.597423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed
00:24:00.327 [2024-07-15 20:37:52.597432] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state.
00:24:00.327 [2024-07-15 20:37:52.597488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.327 [2024-07-15 20:37:52.597497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ / "ABORTED - SQ DELETION (00/08)" pair repeats for cid:1 through cid:63 (lba 16512-24448, step 128), 2024-07-15 20:37:52.597510 - 20:37:52.598550 ...]
00:24:00.328 [2024-07-15 20:37:52.598558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202c4b0 is same with the state(5) to be set
00:24:00.328 [2024-07-15 20:37:52.599820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.328 [2024-07-15 20:37:52.599833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for READ cid:8 through cid:63 (lba 17408-24448, step 128) and then for WRITE cid:0 through cid:6 (lba 24576-25344, step 128), 2024-07-15 20:37:52.599844 - 20:37:52.609205 ...]
00:24:00.330 [2024-07-15 20:37:52.609219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x202ecb0 is same with the state(5) to be set
00:24:00.330 [2024-07-15 20:37:52.611117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:00.330 [2024-07-15 20:37:52.611137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same pair repeats for READ cid:5-7 (lba 17024-17280), then WRITE cid:0-3 (lba 24576-24960), then READ cid:8 (lba 17408), 2024-07-15 20:37:52.611151 - 20:37:52.611285 ...]
[... the same pair repeats for READ cid:9 through cid:38 (lba 17536-21248, step 128), 2024-07-15 20:37:52.611294 - 20:37:52.611799 ...]
00:24:00.331 [2024-07-15 20:37:52.611808] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.611989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.611996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612138] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.331 [2024-07-15 20:37:52.612180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.331 [2024-07-15 20:37:52.612191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.332 [2024-07-15 20:37:52.612198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.332 [2024-07-15 20:37:52.612207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:00.332 [2024-07-15 20:37:52.612215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:00.332 [2024-07-15 20:37:52.612224] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2037670 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.613805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.332 [2024-07-15 20:37:52.613828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:00.332 [2024-07-15 20:37:52.613841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:24:00.332 [2024-07-15 20:37:52.613851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:24:00.332 [2024-07-15 20:37:52.613862] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:24:00.332 [2024-07-15 20:37:52.614460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.332 [2024-07-15 20:37:52.614499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f74b40 with addr=10.0.0.2, port=4420 00:24:00.332 [2024-07-15 20:37:52.614512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f74b40 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.614530] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74da0 (9): Bad file descriptor 00:24:00.332 [2024-07-15 20:37:52.614593] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
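Note on the "(00/08)" completions above: SPDK prints NVMe completion status as (status code type / status code). Type 00h is the generic command status set, and code 08h in that set is Command Aborted due to SQ Deletion, i.e. these reads and writes were simply in flight when the submission queue was torn down for the controller reset; they are not media or transport data errors. A minimal sketch of decoding that pair from completion Dword 3, written against the NVMe base specification field layout rather than SPDK's own helpers:

    /* Decode the "(SCT/SC)" pair SPDK logs, e.g. "(00/08)", from CQE Dword 3.
     * Per the NVMe base spec: DW3[16] = Phase, DW3[24:17] = Status Code,
     * DW3[27:25] = Status Code Type, DW3[31] = Do Not Retry. */
    #include <stdint.h>
    #include <stdio.h>

    #define NVME_SCT_GENERIC            0x0
    #define NVME_SC_ABORTED_SQ_DELETION 0x08

    static void print_status(uint32_t cqe_dw3)
    {
        uint8_t sc  = (cqe_dw3 >> 17) & 0xff; /* Status Code */
        uint8_t sct = (cqe_dw3 >> 25) & 0x7;  /* Status Code Type */
        uint8_t dnr = (cqe_dw3 >> 31) & 0x1;  /* Do Not Retry */
        printf("(%02x/%02x) dnr:%u%s\n", sct, sc, dnr,
               (sct == NVME_SCT_GENERIC && sc == NVME_SC_ABORTED_SQ_DELETION)
                   ? " -> ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        /* 00/08: generic status type, command aborted due to SQ deletion */
        print_status((uint32_t)NVME_SC_ABORTED_SQ_DELETION << 17);
        return 0;
    }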
00:24:00.332 [2024-07-15 20:37:52.614606] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74b40 (9): Bad file descriptor 00:24:00.332 [2024-07-15 20:37:52.615431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.332 [2024-07-15 20:37:52.615468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f2f5d0 with addr=10.0.0.2, port=4420 00:24:00.332 [2024-07-15 20:37:52.615480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f2f5d0 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.615835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.332 [2024-07-15 20:37:52.615847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b1bfe0 with addr=10.0.0.2, port=4420 00:24:00.332 [2024-07-15 20:37:52.615854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b1bfe0 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.616057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.332 [2024-07-15 20:37:52.616071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a32610 with addr=10.0.0.2, port=4420 00:24:00.332 [2024-07-15 20:37:52.616079] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a32610 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.616529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.332 [2024-07-15 20:37:52.616567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f99e40 with addr=10.0.0.2, port=4420 00:24:00.332 [2024-07-15 20:37:52.616578] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f99e40 is same with the state(5) to be set 00:24:00.332 [2024-07-15 20:37:52.616592] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:00.332 [2024-07-15 20:37:52.616599] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:00.332 [2024-07-15 20:37:52.616612] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
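The errno = 111 reported by posix.c:posix_sock_create above is Linux ECONNREFUSED: while the target side is being shut down, nothing is listening on 10.0.0.2:4420 (4420 being the conventional NVMe/TCP port), so every reconnect attempt is refused. A self-contained sketch that reproduces the same failure mode, assuming no listener on the loopback port used here:

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        struct sockaddr_in sa;
        memset(&sa, 0, sizeof(sa));
        sa.sin_family = AF_INET;
        sa.sin_port = htons(4420); /* NVMe/TCP port; assumed closed locally */
        inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);

        if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0) {
            /* On Linux prints: connect() failed, errno = 111 (Connection refused) */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }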
00:24:00.332 [2024-07-15 20:37:52.617207 .. 52.618315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each followed by 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs, one per outstanding IO, condensed)
00:24:00.333 [2024-07-15 20:37:52.618324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20b8f80 is same with the state(5) to be set
00:24:00.333 [2024-07-15 20:37:52.620590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:24:00.333 [2024-07-15 20:37:52.620614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:24:00.333 [2024-07-15 20:37:52.620624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:24:00.334 [2024-07-15 20:37:52.620635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
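The recurring pattern here, "resetting controller" followed by "controller reinitialization failed" and eventually "in failed state", is a bounded reconnect loop giving up. As a loose sketch only: the helper name, attempt limit, and backoff below are hypothetical stand-ins for illustration, not SPDK's actual poller-driven reset state machine:

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Hypothetical transport hook: returns true once the target accepts again. */
    static bool try_reconnect(const char *nqn)
    {
        (void)nqn;
        return false; /* stand-in: in this log the target never comes back in time */
    }

    int main(void)
    {
        const char *nqn = "nqn.2016-06.io.spdk:cnode1";
        const int max_attempts = 5;

        for (int attempt = 1; attempt <= max_attempts; attempt++) {
            printf("[%s] resetting controller (attempt %d/%d)\n",
                   nqn, attempt, max_attempts);
            if (try_reconnect(nqn)) {
                printf("[%s] controller reinitialized\n", nqn);
                return 0;
            }
            fprintf(stderr, "[%s] controller reinitialization failed\n", nqn);
            if (attempt < max_attempts)
                sleep(1u << (attempt - 1)); /* exponential backoff: 1,2,4,8 s */
        }
        fprintf(stderr, "[%s] in failed state.\n", nqn);
        return 1;
    }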
00:24:00.334 task offset: 18304 on job bdev=Nvme2n1 fails
00:24:00.334
00:24:00.334 Latency(us)
00:24:00.334 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:00.334 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme1n1 ended in about 0.86 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme1n1  : 0.86 148.64  9.29 74.32 0.00 283515.73 21954.56 244667.73
00:24:00.334 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme2n1 ended in about 0.85 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme2n1  : 0.85 151.26  9.45 75.63 0.00 272025.88 21626.88 283115.52
00:24:00.334 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme3n1 ended in about 0.87 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme3n1  : 0.87 154.85  9.68 73.41 0.00 264476.73 24029.87 241172.48
00:24:00.334 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme4n1 ended in about 0.85 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme4n1  : 0.85 225.86 14.12 75.29 0.00 195270.51  6580.91 251658.24
00:24:00.334 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme5n1 ended in about 0.86 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme5n1  : 0.86 222.22 13.89  2.34 0.00 254821.26 59419.31 210589.01
00:24:00.334 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme6n1 ended in about 0.86 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme6n1  : 0.86 149.13  9.32 74.57 0.00 250207.86 14527.15 258648.75
00:24:00.334 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme7n1 ended in about 0.86 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme7n1  : 0.86 219.65 13.73  4.67 0.00 242281.81 23592.96 220200.96
00:24:00.334 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme8n1 ended in about 0.88 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme8n1  : 0.88 145.31  9.08 72.65 0.00 245023.29 18677.76 253405.87
00:24:00.334 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme9n1 ended in about 0.85 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme9n1  : 0.85 226.22 14.14 75.41 0.00 170789.12 25886.72 222822.40
00:24:00.334 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:00.334 Job: Nvme10n1 ended in about 0.87 seconds with error
00:24:00.334 Verification LBA range: start 0x0 length 0x400
00:24:00.334 Nvme10n1 : 0.87 150.89  9.43 73.16 0.00 225339.63 11632.64 248162.99
00:24:00.334 ===================================================================================================================
00:24:00.334 Total    : 1794.03 112.13 601.44 0.00 236862.97 6580.91 283115.52
00:24:00.334 [2024-07-15 20:37:52.649271] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:24:00.334 [2024-07-15 20:37:52.649303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting
controller 00:24:00.334 [2024-07-15 20:37:52.649362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2f5d0 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.649376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1bfe0 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.649386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a32610 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.649395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f99e40 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.649404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.649411] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.649421] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:00.334 [2024-07-15 20:37:52.649530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.334 [2024-07-15 20:37:52.650001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.334 [2024-07-15 20:37:52.650017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x20fafd0 with addr=10.0.0.2, port=4420 00:24:00.334 [2024-07-15 20:37:52.650026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20fafd0 is same with the state(5) to be set 00:24:00.334 [2024-07-15 20:37:52.650227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.334 [2024-07-15 20:37:52.650259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f51bc0 with addr=10.0.0.2, port=4420 00:24:00.334 [2024-07-15 20:37:52.650266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f51bc0 is same with the state(5) to be set 00:24:00.334 [2024-07-15 20:37:52.650612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.334 [2024-07-15 20:37:52.650623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f86400 with addr=10.0.0.2, port=4420 00:24:00.334 [2024-07-15 20:37:52.650630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f86400 is same with the state(5) to be set 00:24:00.334 [2024-07-15 20:37:52.650998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.334 [2024-07-15 20:37:52.651008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1fa4120 with addr=10.0.0.2, port=4420 00:24:00.334 [2024-07-15 20:37:52.651016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1fa4120 is same with the state(5) to be set 00:24:00.334 [2024-07-15 20:37:52.651023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651029] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
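A consistency check on the Latency(us) table above: with the 65536-byte IO size, MiB/s = IOPS × 65536 / 2^20 = IOPS / 16, which matches every row (for example Nvme1n1: 148.64 / 16 = 9.29). A one-line verification:

    #include <stdio.h>

    int main(void)
    {
        const double iops    = 148.64;   /* Nvme1n1 row from the table above */
        const double io_size = 65536.0;  /* bytes per IO (depth-64 verify job) */
        /* 1 MiB = 1048576 bytes, so MiB/s = IOPS / 16 for 64 KiB IOs */
        printf("MiB/s = %.2f\n", iops * io_size / 1048576.0); /* prints 9.29 */
        return 0;
    }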
00:24:00.334 [2024-07-15 20:37:52.651047] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651058] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651065] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651075] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651113] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651145] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.334 [2024-07-15 20:37:52.651157] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.334 [2024-07-15 20:37:52.651167] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.334 [2024-07-15 20:37:52.651178] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:00.334 [2024-07-15 20:37:52.651501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.334 [2024-07-15 20:37:52.651513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.334 [2024-07-15 20:37:52.651520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.334 [2024-07-15 20:37:52.651526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.334 [2024-07-15 20:37:52.651542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20fafd0 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.651552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f51bc0 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.651562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f86400 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.651571] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fa4120 (9): Bad file descriptor 00:24:00.334 [2024-07-15 20:37:52.651847] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:24:00.334 [2024-07-15 20:37:52.651860] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:24:00.334 [2024-07-15 20:37:52.651884] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651891] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651931] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651948] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:24:00.334 [2024-07-15 20:37:52.651959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:24:00.334 [2024-07-15 20:37:52.651966] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:24:00.334 [2024-07-15 20:37:52.651972] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:24:00.334 [2024-07-15 20:37:52.652004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.335 [2024-07-15 20:37:52.652012] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.335 [2024-07-15 20:37:52.652018] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.335 [2024-07-15 20:37:52.652025] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.335 [2024-07-15 20:37:52.652425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.335 [2024-07-15 20:37:52.652437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f74da0 with addr=10.0.0.2, port=4420 00:24:00.335 [2024-07-15 20:37:52.652445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f74da0 is same with the state(5) to be set 00:24:00.335 [2024-07-15 20:37:52.652653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:00.335 [2024-07-15 20:37:52.652663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f74b40 with addr=10.0.0.2, port=4420 00:24:00.335 [2024-07-15 20:37:52.652671] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f74b40 is same with the state(5) to be set 00:24:00.335 [2024-07-15 20:37:52.652701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74da0 (9): Bad file descriptor 00:24:00.335 [2024-07-15 20:37:52.652713] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f74b40 (9): Bad file descriptor 00:24:00.335 [2024-07-15 20:37:52.652739] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:24:00.335 [2024-07-15 20:37:52.652746] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:24:00.335 [2024-07-15 20:37:52.652754] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:24:00.335 [2024-07-15 20:37:52.652764] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:24:00.335 [2024-07-15 20:37:52.652770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:24:00.335 [2024-07-15 20:37:52.652777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:24:00.335 [2024-07-15 20:37:52.652805] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:00.335 [2024-07-15 20:37:52.652812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:00.594 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:24:00.595 20:37:52 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 1418208 00:24:01.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (1418208) - No such process 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:01.533 rmmod nvme_tcp 00:24:01.533 rmmod nvme_fabrics 00:24:01.533 rmmod nvme_keyring 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:01.533 20:37:53 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.134 20:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.134 00:24:04.134 real 0m7.808s 00:24:04.134 user 0m19.290s 00:24:04.134 sys 0m1.214s 00:24:04.134 
20:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.134 20:37:55 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 ************************************ 00:24:04.134 END TEST nvmf_shutdown_tc3 00:24:04.134 ************************************ 00:24:04.134 20:37:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:24:04.134 20:37:56 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:24:04.134 00:24:04.134 real 0m32.781s 00:24:04.134 user 1m14.037s 00:24:04.134 sys 0m9.839s 00:24:04.134 20:37:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.134 20:37:56 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 ************************************ 00:24:04.134 END TEST nvmf_shutdown 00:24:04.134 ************************************ 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:04.134 20:37:56 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 20:37:56 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 20:37:56 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:24:04.134 20:37:56 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.134 20:37:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:04.134 ************************************ 00:24:04.134 START TEST nvmf_multicontroller 00:24:04.134 ************************************ 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:04.134 * Looking for test storage... 
00:24:04.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.134 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:04.135 20:37:56 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.135 20:37:56 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:24:12.271 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:12.272 20:38:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:12.272 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:12.272 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:12.272 Found net devices under 0000:31:00.0: cvl_0_0 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:12.272 Found net devices under 0000:31:00.1: cvl_0_1 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:12.272 20:38:04 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:12.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:12.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.588 ms 00:24:12.272 00:24:12.272 --- 10.0.0.2 ping statistics --- 00:24:12.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.272 rtt min/avg/max/mdev = 0.588/0.588/0.588/0.000 ms 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:12.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:12.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.273 ms 00:24:12.272 00:24:12.272 --- 10.0.0.1 ping statistics --- 00:24:12.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:12.272 rtt min/avg/max/mdev = 0.273/0.273/0.273/0.000 ms 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=1423612 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 1423612 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1423612 ']' 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:12.272 20:38:04 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:12.272 [2024-07-15 20:38:04.559895] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:24:12.272 [2024-07-15 20:38:04.559944] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.272 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.533 [2024-07-15 20:38:04.653259] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:12.533 [2024-07-15 20:38:04.737774] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:12.533 [2024-07-15 20:38:04.737831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:12.533 [2024-07-15 20:38:04.737839] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:12.533 [2024-07-15 20:38:04.737846] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:12.533 [2024-07-15 20:38:04.737852] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
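For readability, the nvmftestinit/nvmfappstart sequence traced above reduces to roughly the following sketch. Interface names, addresses and nvmf_tgt flags are taken from the trace itself; everything else (root privileges, the two ice ports already renamed cvl_0_0/cvl_0_1) is assumed, so treat this as an illustrative outline rather than the canonical common.sh logic:

  ip netns add cvl_0_0_ns_spdk                        # target-side namespace from the trace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator keeps 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                  # initiator -> target sanity check
  # launch the target inside the namespace, as nvmfappstart does above:
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &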
00:24:12.533 [2024-07-15 20:38:04.737993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:12.533 [2024-07-15 20:38:04.738155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:12.533 [2024-07-15 20:38:04.738155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 [2024-07-15 20:38:05.381911] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 Malloc0 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 [2024-07-15 20:38:05.445596] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 
20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 [2024-07-15 20:38:05.457542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.103 Malloc1 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.103 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1423963 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller 
-- host/multicontroller.sh@47 -- # waitforlisten 1423963 /var/tmp/bdevperf.sock 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 1423963 ']' 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:13.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:13.363 20:38:05 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.304 NVMe0n1 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.304 1 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:14.304 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 
10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.305 request: 00:24:14.305 { 00:24:14.305 "name": "NVMe0", 00:24:14.305 "trtype": "tcp", 00:24:14.305 "traddr": "10.0.0.2", 00:24:14.305 "adrfam": "ipv4", 00:24:14.305 "trsvcid": "4420", 00:24:14.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.305 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:14.305 "hostaddr": "10.0.0.2", 00:24:14.305 "hostsvcid": "60000", 00:24:14.305 "prchk_reftag": false, 00:24:14.305 "prchk_guard": false, 00:24:14.305 "hdgst": false, 00:24:14.305 "ddgst": false, 00:24:14.305 "method": "bdev_nvme_attach_controller", 00:24:14.305 "req_id": 1 00:24:14.305 } 00:24:14.305 Got JSON-RPC error response 00:24:14.305 response: 00:24:14.305 { 00:24:14.305 "code": -114, 00:24:14.305 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:14.305 } 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.305 request: 00:24:14.305 { 00:24:14.305 "name": "NVMe0", 00:24:14.305 "trtype": "tcp", 00:24:14.305 "traddr": "10.0.0.2", 00:24:14.305 "adrfam": "ipv4", 00:24:14.305 "trsvcid": "4420", 00:24:14.305 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:14.305 "hostaddr": "10.0.0.2", 00:24:14.305 "hostsvcid": "60000", 00:24:14.305 "prchk_reftag": false, 00:24:14.305 "prchk_guard": false, 00:24:14.305 
"hdgst": false, 00:24:14.305 "ddgst": false, 00:24:14.305 "method": "bdev_nvme_attach_controller", 00:24:14.305 "req_id": 1 00:24:14.305 } 00:24:14.305 Got JSON-RPC error response 00:24:14.305 response: 00:24:14.305 { 00:24:14.305 "code": -114, 00:24:14.305 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:14.305 } 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.305 request: 00:24:14.305 { 00:24:14.305 "name": "NVMe0", 00:24:14.305 "trtype": "tcp", 00:24:14.305 "traddr": "10.0.0.2", 00:24:14.305 "adrfam": "ipv4", 00:24:14.305 "trsvcid": "4420", 00:24:14.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.305 "hostaddr": "10.0.0.2", 00:24:14.305 "hostsvcid": "60000", 00:24:14.305 "prchk_reftag": false, 00:24:14.305 "prchk_guard": false, 00:24:14.305 "hdgst": false, 00:24:14.305 "ddgst": false, 00:24:14.305 "multipath": "disable", 00:24:14.305 "method": "bdev_nvme_attach_controller", 00:24:14.305 "req_id": 1 00:24:14.305 } 00:24:14.305 Got JSON-RPC error response 00:24:14.305 response: 00:24:14.305 { 00:24:14.305 "code": -114, 00:24:14.305 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:24:14.305 } 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.305 20:38:06 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.305 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.305 request: 00:24:14.305 { 00:24:14.305 "name": "NVMe0", 00:24:14.305 "trtype": "tcp", 00:24:14.305 "traddr": "10.0.0.2", 00:24:14.305 "adrfam": "ipv4", 00:24:14.305 "trsvcid": "4420", 00:24:14.305 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:14.305 "hostaddr": "10.0.0.2", 00:24:14.305 "hostsvcid": "60000", 00:24:14.305 "prchk_reftag": false, 00:24:14.305 "prchk_guard": false, 00:24:14.306 "hdgst": false, 00:24:14.306 "ddgst": false, 00:24:14.306 "multipath": "failover", 00:24:14.306 "method": "bdev_nvme_attach_controller", 00:24:14.306 "req_id": 1 00:24:14.306 } 00:24:14.306 Got JSON-RPC error response 00:24:14.306 response: 00:24:14.306 { 00:24:14.566 "code": -114, 00:24:14.566 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:24:14.566 } 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.566 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.566 20:38:06 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.826 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:14.826 20:38:07 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:15.765 0 00:24:15.765 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:15.765 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:15.765 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 1423963 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1423963 ']' 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1423963 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423963 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423963' 00:24:16.025 killing process with pid 1423963 00:24:16.025 20:38:08 
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1423963 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1423963 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:24:16.025 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:24:16.025 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:16.025 [2024-07-15 20:38:05.576662] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:24:16.025 [2024-07-15 20:38:05.576716] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1423963 ] 00:24:16.025 EAL: No free 2048 kB hugepages reported on node 1 00:24:16.025 [2024-07-15 20:38:05.642669] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:16.025 [2024-07-15 20:38:05.706874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:16.025 [2024-07-15 20:38:07.006837] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name c945a77b-545a-45b1-9b80-e829985180f6 already exists 00:24:16.025 [2024-07-15 20:38:07.006870] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:c945a77b-545a-45b1-9b80-e829985180f6 alias for bdev NVMe1n1 00:24:16.025 [2024-07-15 20:38:07.006878] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:16.025 Running I/O for 1 seconds... 
00:24:16.025
00:24:16.025 Latency(us)
00:24:16.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.025 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096)
00:24:16.025 NVMe0n1 : 1.00 29319.18 114.53 0.00 0.00 4355.27 2170.88 10485.76
00:24:16.025 ===================================================================================================================
00:24:16.026 Total : 29319.18 114.53 0.00 0.00 4355.27 2170.88 10485.76
00:24:16.026 Received shutdown signal, test time was about 1.000000 seconds
00:24:16.026
00:24:16.026 Latency(us)
00:24:16.026 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:16.026 ===================================================================================================================
00:24:16.026 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:24:16.026 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt ---
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:16.026 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:16.026 rmmod nvme_tcp
00:24:16.286 rmmod nvme_fabrics
00:24:16.286 rmmod nvme_keyring
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 1423612 ']'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 1423612 ']'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1423612'
00:24:16.286 killing process with pid 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 1423612
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:16.286 20:38:08 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:18.825 20:38:10 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:18.825
00:24:18.825 real 0m14.589s
00:24:18.825 user 0m17.189s
00:24:18.825 sys 0m6.832s
00:24:18.825 20:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:18.825 20:38:10 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:18.825 ************************************
00:24:18.825 END TEST nvmf_multicontroller
00:24:18.825 ************************************
00:24:18.825 20:38:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:18.825 20:38:10 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:24:18.825 20:38:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:18.825 20:38:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:18.825 20:38:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:18.825 ************************************
00:24:18.825 START TEST nvmf_aer
00:24:18.825 ************************************
00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp
00:24:18.825 * Looking for test storage...
00:24:18.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:24:18.825 20:38:10 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:26.960 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:26.961 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 
0x159b)' 00:24:26.961 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:26.961 Found net devices under 0000:31:00.0: cvl_0_0 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:26.961 Found net devices under 0000:31:00.1: cvl_0_1 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:26.961 
20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:24:26.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:26.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.466 ms
00:24:26.961
00:24:26.961 --- 10.0.0.2 ping statistics ---
00:24:26.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:26.961 rtt min/avg/max/mdev = 0.466/0.466/0.466/0.000 ms
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:24:26.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:26.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.281 ms
00:24:26.961
00:24:26.961 --- 10.0.0.1 ping statistics ---
00:24:26.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:26.961 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=1429000
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 1429000
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 1429000 ']'
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:24:26.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:26.961 20:38:18 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:24:26.961 [2024-07-15 20:38:18.615935] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:24:26.961 [2024-07-15 20:38:18.615991] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:26.961 EAL: No free 2048 kB hugepages reported on node 1
00:24:26.962 [2024-07-15 20:38:18.694185] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:24:26.962 [2024-07-15 20:38:18.766708] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:26.962 [2024-07-15 20:38:18.766748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:26.962 [2024-07-15 20:38:18.766756] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:26.962 [2024-07-15 20:38:18.766763] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:26.962 [2024-07-15 20:38:18.766768] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:26.962 [2024-07-15 20:38:18.766907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:26.962 [2024-07-15 20:38:18.767020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:26.962 [2024-07-15 20:38:18.767175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.962 [2024-07-15 20:38:18.767176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 [2024-07-15 20:38:19.433794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 Malloc0 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:27.221 [2024-07-15 20:38:19.493285] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 ***
00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems
00:24:27.221 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.222 [
00:24:27.222 {
00:24:27.222 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:27.222 "subtype": "Discovery",
00:24:27.222 "listen_addresses": [],
00:24:27.222 "allow_any_host": true,
00:24:27.222 "hosts": []
00:24:27.222 },
00:24:27.222 {
00:24:27.222 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:27.222 "subtype": "NVMe",
00:24:27.222 "listen_addresses": [
00:24:27.222 {
00:24:27.222 "trtype": "TCP",
00:24:27.222 "adrfam": "IPv4",
00:24:27.222 "traddr": "10.0.0.2",
00:24:27.222 "trsvcid": "4420"
00:24:27.222 }
00:24:27.222 ],
00:24:27.222 "allow_any_host": true,
00:24:27.222 "hosts": [],
00:24:27.222 "serial_number": "SPDK00000000000001",
00:24:27.222 "model_number": "SPDK bdev Controller",
00:24:27.222 "max_namespaces": 2,
00:24:27.222 "min_cntlid": 1,
00:24:27.222 "max_cntlid": 65519,
00:24:27.222 "namespaces": [
00:24:27.222 {
00:24:27.222 "nsid": 1,
00:24:27.222 "bdev_name": "Malloc0",
00:24:27.222 "name": "Malloc0",
00:24:27.222 "nguid": "E63FA9758B0B4E3A9FACA5A632ACA81B",
00:24:27.222 "uuid": "e63fa975-8b0b-4e3a-9fac-a5a632aca81b"
00:24:27.222 }
00:24:27.222 ]
00:24:27.222 }
00:24:27.222 ]
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=1429339
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']'
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1
00:24:27.222 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:24:27.222 EAL: No free 2048 kB hugepages reported on node 1
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']'
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']'
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.482 Malloc1
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.482 Asynchronous Event Request test
00:24:27.482 Attaching to 10.0.0.2
00:24:27.482 Attached to 10.0.0.2
00:24:27.482 Registering asynchronous event callbacks...
00:24:27.482 Starting namespace attribute notice tests for all controllers...
00:24:27.482 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00
00:24:27.482 aer_cb - Changed Namespace
00:24:27.482 Cleaning up...
00:24:27.482 [
00:24:27.482 {
00:24:27.482 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:24:27.482 "subtype": "Discovery",
00:24:27.482 "listen_addresses": [],
00:24:27.482 "allow_any_host": true,
00:24:27.482 "hosts": []
00:24:27.482 },
00:24:27.482 {
00:24:27.482 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:24:27.482 "subtype": "NVMe",
00:24:27.482 "listen_addresses": [
00:24:27.482 {
00:24:27.482 "trtype": "TCP",
00:24:27.482 "adrfam": "IPv4",
00:24:27.482 "traddr": "10.0.0.2",
00:24:27.482 "trsvcid": "4420"
00:24:27.482 }
00:24:27.482 ],
00:24:27.482 "allow_any_host": true,
00:24:27.482 "hosts": [],
00:24:27.482 "serial_number": "SPDK00000000000001",
00:24:27.482 "model_number": "SPDK bdev Controller",
00:24:27.482 "max_namespaces": 2,
00:24:27.482 "min_cntlid": 1,
00:24:27.482 "max_cntlid": 65519,
00:24:27.482 "namespaces": [
00:24:27.482 {
00:24:27.482 "nsid": 1,
00:24:27.482 "bdev_name": "Malloc0",
00:24:27.482 "name": "Malloc0",
00:24:27.482 "nguid": "E63FA9758B0B4E3A9FACA5A632ACA81B",
00:24:27.482 "uuid": "e63fa975-8b0b-4e3a-9fac-a5a632aca81b"
00:24:27.482 },
00:24:27.482 {
00:24:27.482 "nsid": 2,
00:24:27.482 "bdev_name": "Malloc1",
00:24:27.482 "name": "Malloc1",
00:24:27.482 "nguid": "D234CA9C0F9C41D398DA4DB8E2BFC9CA",
00:24:27.482 "uuid": "d234ca9c-0f9c-41d3-98da-4db8e2bfc9ca"
00:24:27.482 }
00:24:27.482 ]
00:24:27.482 }
00:24:27.482 ]
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 1429339
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.482 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:27.483 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:27.483 rmmod nvme_tcp
00:24:27.744 rmmod nvme_fabrics
00:24:27.744 rmmod nvme_keyring
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 1429000 ']'
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 1429000
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 1429000 ']'
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 1429000
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1429000
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1429000'
00:24:27.744 killing process with pid 1429000
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 1429000
00:24:27.744 20:38:19 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 1429000
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:27.744 20:38:20 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:30.299 20:38:22 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:30.299
00:24:30.299 real 0m11.372s
00:24:30.299 user 0m7.426s
00:24:30.299 sys 0m6.155s
00:24:30.299 20:38:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:30.299 20:38:22 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x
00:24:30.299 ************************************
00:24:30.299 END TEST nvmf_aer
00:24:30.299 ************************************
00:24:30.299 20:38:22 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:30.300 20:38:22 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:24:30.300 20:38:22 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:30.300 20:38:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:30.300 20:38:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:30.300 ************************************
00:24:30.300 START TEST nvmf_async_init
00:24:30.300 ************************************
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp
00:24:30.300 * Looking for test storage...
00:24:30.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dfdf1cae75c046caab6e29912ea77fdc 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:24:30.300 20:38:22 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:38.439 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:38.439 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # 
[[ tcp == rdma ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.439 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:38.440 Found net devices under 0000:31:00.0: cvl_0_0 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:38.440 Found net devices under 0000:31:00.1: cvl_0_1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:38.440 
20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:38.440 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:38.440 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.951 ms 00:24:38.440 00:24:38.440 --- 10.0.0.2 ping statistics --- 00:24:38.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.440 rtt min/avg/max/mdev = 0.951/0.951/0.951/0.000 ms 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:38.440 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:38.440 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.339 ms 00:24:38.440 00:24:38.440 --- 10.0.0.1 ping statistics --- 00:24:38.440 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:38.440 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=1434025 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 1434025 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 
0x1 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 1434025 ']' 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:38.440 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:38.440 20:38:30 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:38.440 [2024-07-15 20:38:30.503465] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:24:38.440 [2024-07-15 20:38:30.503558] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:38.440 EAL: No free 2048 kB hugepages reported on node 1 00:24:38.440 [2024-07-15 20:38:30.586423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.440 [2024-07-15 20:38:30.658889] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:38.440 [2024-07-15 20:38:30.658929] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:38.440 [2024-07-15 20:38:30.658937] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:38.440 [2024-07-15 20:38:30.658943] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:38.440 [2024-07-15 20:38:30.658949] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
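The namespace topology these tests run on is built from two ports of one physical NIC: cvl_0_0 is moved into a private network namespace to act as the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator (10.0.0.1), so traffic crosses a real wire between the two ports. A minimal sketch of the nvmf_tcp_init steps traced above, assuming those interface names (they will differ on other hosts):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port leaves the default netns
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
  ping -c 1 10.0.0.2                                     # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator

This is also why NVMF_APP gets prefixed with NVMF_TARGET_NS_CMD above: the nvmf_tgt binary itself is launched through ip netns exec cvl_0_0_ns_spdk so it binds inside the namespace.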
00:24:38.440 [2024-07-15 20:38:30.658967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 [2024-07-15 20:38:31.309756] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 null0 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.012 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dfdf1cae75c046caab6e29912ea77fdc 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.013 [2024-07-15 20:38:31.366000] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.013 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.274 nvme0n1 00:24:39.274 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.274 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.274 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.274 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.274 [ 00:24:39.274 { 00:24:39.275 "name": "nvme0n1", 00:24:39.275 "aliases": [ 00:24:39.275 "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc" 00:24:39.275 ], 00:24:39.275 "product_name": "NVMe disk", 00:24:39.275 "block_size": 512, 00:24:39.275 "num_blocks": 2097152, 00:24:39.275 "uuid": "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc", 00:24:39.275 "assigned_rate_limits": { 00:24:39.275 "rw_ios_per_sec": 0, 00:24:39.275 "rw_mbytes_per_sec": 0, 00:24:39.275 "r_mbytes_per_sec": 0, 00:24:39.275 "w_mbytes_per_sec": 0 00:24:39.275 }, 00:24:39.275 "claimed": false, 00:24:39.275 "zoned": false, 00:24:39.275 "supported_io_types": { 00:24:39.275 "read": true, 00:24:39.275 "write": true, 00:24:39.275 "unmap": false, 00:24:39.275 "flush": true, 00:24:39.275 "reset": true, 00:24:39.275 "nvme_admin": true, 00:24:39.275 "nvme_io": true, 00:24:39.275 "nvme_io_md": false, 00:24:39.275 "write_zeroes": true, 00:24:39.275 "zcopy": false, 00:24:39.275 "get_zone_info": false, 00:24:39.275 "zone_management": false, 00:24:39.275 "zone_append": false, 00:24:39.275 "compare": true, 00:24:39.275 "compare_and_write": true, 00:24:39.275 "abort": true, 00:24:39.275 "seek_hole": false, 00:24:39.275 "seek_data": false, 00:24:39.275 "copy": true, 00:24:39.275 "nvme_iov_md": false 00:24:39.275 }, 00:24:39.275 "memory_domains": [ 00:24:39.275 { 00:24:39.275 "dma_device_id": "system", 00:24:39.275 "dma_device_type": 1 00:24:39.275 } 00:24:39.275 ], 00:24:39.275 "driver_specific": { 00:24:39.275 "nvme": [ 00:24:39.275 { 00:24:39.275 "trid": { 00:24:39.275 "trtype": "TCP", 00:24:39.275 "adrfam": "IPv4", 00:24:39.275 "traddr": "10.0.0.2", 00:24:39.275 "trsvcid": "4420", 00:24:39.275 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.275 }, 00:24:39.275 "ctrlr_data": { 00:24:39.275 "cntlid": 1, 00:24:39.275 "vendor_id": "0x8086", 00:24:39.275 "model_number": "SPDK bdev Controller", 00:24:39.275 "serial_number": "00000000000000000000", 00:24:39.275 "firmware_revision": "24.09", 00:24:39.275 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.275 "oacs": { 00:24:39.275 "security": 0, 00:24:39.275 "format": 0, 00:24:39.275 "firmware": 0, 00:24:39.275 "ns_manage": 0 00:24:39.275 }, 00:24:39.275 "multi_ctrlr": true, 00:24:39.275 "ana_reporting": false 00:24:39.275 }, 00:24:39.275 "vs": { 00:24:39.275 "nvme_version": "1.3" 00:24:39.275 }, 00:24:39.275 "ns_data": { 00:24:39.275 "id": 1, 00:24:39.275 "can_share": true 00:24:39.275 } 00:24:39.275 } 00:24:39.275 ], 00:24:39.275 "mp_policy": "active_passive" 00:24:39.275 } 00:24:39.275 } 00:24:39.275 ] 00:24:39.275 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.275 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 
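The async_init provisioning traced above is a straight run of RPCs; as a sketch, with $rpc standing for this tree's scripts/rpc.py talking to the default /var/tmp/spdk.sock:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o                  # options taken verbatim from NVMF_TRANSPORT_OPTS
  $rpc bdev_null_create null0 1024 512                  # 1024 MiB backing bdev, 512 B blocks
  $rpc bdev_wait_for_examine
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  # -g pins the namespace GUID; it round-trips as the host-side bdev uuid/alias
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dfdf1cae75c046caab6e29912ea77fdc
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # initiator side: surface the remote namespace as local bdev nvme0n1
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
      -n nqn.2016-06.io.spdk:cnode0

The bdev_get_bdevs -b nvme0n1 dump above confirms the round-trip: num_blocks 2097152 × block_size 512 matches the 1 GiB null bdev, and the -g value reappears as uuid dfdf1cae-75c0-46ca-ab6e-29912ea77fdc, with cntlid 1 on this first connection.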
00:24:39.275 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.275 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.275 [2024-07-15 20:38:31.634570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:39.275 [2024-07-15 20:38:31.634631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x265f9f0 (9): Bad file descriptor 00:24:39.536 [2024-07-15 20:38:31.766328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.536 [ 00:24:39.536 { 00:24:39.536 "name": "nvme0n1", 00:24:39.536 "aliases": [ 00:24:39.536 "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc" 00:24:39.536 ], 00:24:39.536 "product_name": "NVMe disk", 00:24:39.536 "block_size": 512, 00:24:39.536 "num_blocks": 2097152, 00:24:39.536 "uuid": "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc", 00:24:39.536 "assigned_rate_limits": { 00:24:39.536 "rw_ios_per_sec": 0, 00:24:39.536 "rw_mbytes_per_sec": 0, 00:24:39.536 "r_mbytes_per_sec": 0, 00:24:39.536 "w_mbytes_per_sec": 0 00:24:39.536 }, 00:24:39.536 "claimed": false, 00:24:39.536 "zoned": false, 00:24:39.536 "supported_io_types": { 00:24:39.536 "read": true, 00:24:39.536 "write": true, 00:24:39.536 "unmap": false, 00:24:39.536 "flush": true, 00:24:39.536 "reset": true, 00:24:39.536 "nvme_admin": true, 00:24:39.536 "nvme_io": true, 00:24:39.536 "nvme_io_md": false, 00:24:39.536 "write_zeroes": true, 00:24:39.536 "zcopy": false, 00:24:39.536 "get_zone_info": false, 00:24:39.536 "zone_management": false, 00:24:39.536 "zone_append": false, 00:24:39.536 "compare": true, 00:24:39.536 "compare_and_write": true, 00:24:39.536 "abort": true, 00:24:39.536 "seek_hole": false, 00:24:39.536 "seek_data": false, 00:24:39.536 "copy": true, 00:24:39.536 "nvme_iov_md": false 00:24:39.536 }, 00:24:39.536 "memory_domains": [ 00:24:39.536 { 00:24:39.536 "dma_device_id": "system", 00:24:39.536 "dma_device_type": 1 00:24:39.536 } 00:24:39.536 ], 00:24:39.536 "driver_specific": { 00:24:39.536 "nvme": [ 00:24:39.536 { 00:24:39.536 "trid": { 00:24:39.536 "trtype": "TCP", 00:24:39.536 "adrfam": "IPv4", 00:24:39.536 "traddr": "10.0.0.2", 00:24:39.536 "trsvcid": "4420", 00:24:39.536 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.536 }, 00:24:39.536 "ctrlr_data": { 00:24:39.536 "cntlid": 2, 00:24:39.536 "vendor_id": "0x8086", 00:24:39.536 "model_number": "SPDK bdev Controller", 00:24:39.536 "serial_number": "00000000000000000000", 00:24:39.536 "firmware_revision": "24.09", 00:24:39.536 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.536 "oacs": { 00:24:39.536 "security": 0, 00:24:39.536 "format": 0, 00:24:39.536 "firmware": 0, 00:24:39.536 "ns_manage": 0 00:24:39.536 }, 00:24:39.536 "multi_ctrlr": true, 00:24:39.536 "ana_reporting": false 00:24:39.536 }, 00:24:39.536 "vs": { 00:24:39.536 "nvme_version": "1.3" 00:24:39.536 }, 00:24:39.536 "ns_data": { 00:24:39.536 "id": 1, 00:24:39.536 "can_share": true 00:24:39.536 } 00:24:39.536 } 00:24:39.536 ], 00:24:39.536 "mp_policy": "active_passive" 00:24:39.536 } 00:24:39.536 } 
00:24:39.536 ] 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.00KuOCdRhg 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.00KuOCdRhg 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.536 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.536 [2024-07-15 20:38:31.831197] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:39.537 [2024-07-15 20:38:31.831317] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.00KuOCdRhg 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.537 [2024-07-15 20:38:31.843223] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.00KuOCdRhg 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.537 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.537 [2024-07-15 20:38:31.855277] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:39.537 [2024-07-15 20:38:31.855320] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 
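Two things happen in the span above. First, bdev_nvme_reset_controller tears down and re-establishes the admin connection; the ERROR about flushing tqpair 0x265f9f0 (Bad file descriptor) is the old socket being drained and is harmless here, since the reset completes and the reconnect is handed a fresh controller ID: cntlid goes from 1 to 2 between the two bdev_get_bdevs dumps. A hedged way to check that from a shell, using jq over the JSON shown above (jq is this sketch's assumption, not something the test uses):

  $rpc bdev_nvme_reset_controller nvme0
  $rpc bdev_get_bdevs -b nvme0n1 \
      | jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid'   # 1 before the reset, 2 after

Second, the TLS leg exercises the (experimental, per the NOTICEs) PSK flow: write the interchange-format key to an owner-only file, stop the subsystem from accepting arbitrary hosts, open a second listener on 4421 with --secure-channel, authorize the host NQN with its PSK, then attach with a matching -q/--psk pair. Condensed (the key is this run's test vector, not a secret):

  key_path=$(mktemp)
  echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key_path"
  chmod 0600 "$key_path"
  $rpc nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key_path"
  $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key_path"

The two deprecation warnings (nvmf_tcp_psk_path and nvme_ctrlr_psk) both refer to this PSK-by-file interface being scheduled for removal in v24.09; the log_deprecation_hits lines at shutdown count exactly these two uses.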
00:24:39.797 nvme0n1 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.797 [ 00:24:39.797 { 00:24:39.797 "name": "nvme0n1", 00:24:39.797 "aliases": [ 00:24:39.797 "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc" 00:24:39.797 ], 00:24:39.797 "product_name": "NVMe disk", 00:24:39.797 "block_size": 512, 00:24:39.797 "num_blocks": 2097152, 00:24:39.797 "uuid": "dfdf1cae-75c0-46ca-ab6e-29912ea77fdc", 00:24:39.797 "assigned_rate_limits": { 00:24:39.797 "rw_ios_per_sec": 0, 00:24:39.797 "rw_mbytes_per_sec": 0, 00:24:39.797 "r_mbytes_per_sec": 0, 00:24:39.797 "w_mbytes_per_sec": 0 00:24:39.797 }, 00:24:39.797 "claimed": false, 00:24:39.797 "zoned": false, 00:24:39.797 "supported_io_types": { 00:24:39.797 "read": true, 00:24:39.797 "write": true, 00:24:39.797 "unmap": false, 00:24:39.797 "flush": true, 00:24:39.797 "reset": true, 00:24:39.797 "nvme_admin": true, 00:24:39.797 "nvme_io": true, 00:24:39.797 "nvme_io_md": false, 00:24:39.797 "write_zeroes": true, 00:24:39.797 "zcopy": false, 00:24:39.797 "get_zone_info": false, 00:24:39.797 "zone_management": false, 00:24:39.797 "zone_append": false, 00:24:39.797 "compare": true, 00:24:39.797 "compare_and_write": true, 00:24:39.797 "abort": true, 00:24:39.797 "seek_hole": false, 00:24:39.797 "seek_data": false, 00:24:39.797 "copy": true, 00:24:39.797 "nvme_iov_md": false 00:24:39.797 }, 00:24:39.797 "memory_domains": [ 00:24:39.797 { 00:24:39.797 "dma_device_id": "system", 00:24:39.797 "dma_device_type": 1 00:24:39.797 } 00:24:39.797 ], 00:24:39.797 "driver_specific": { 00:24:39.797 "nvme": [ 00:24:39.797 { 00:24:39.797 "trid": { 00:24:39.797 "trtype": "TCP", 00:24:39.797 "adrfam": "IPv4", 00:24:39.797 "traddr": "10.0.0.2", 00:24:39.797 "trsvcid": "4421", 00:24:39.797 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:24:39.797 }, 00:24:39.797 "ctrlr_data": { 00:24:39.797 "cntlid": 3, 00:24:39.797 "vendor_id": "0x8086", 00:24:39.797 "model_number": "SPDK bdev Controller", 00:24:39.797 "serial_number": "00000000000000000000", 00:24:39.797 "firmware_revision": "24.09", 00:24:39.797 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:24:39.797 "oacs": { 00:24:39.797 "security": 0, 00:24:39.797 "format": 0, 00:24:39.797 "firmware": 0, 00:24:39.797 "ns_manage": 0 00:24:39.797 }, 00:24:39.797 "multi_ctrlr": true, 00:24:39.797 "ana_reporting": false 00:24:39.797 }, 00:24:39.797 "vs": { 00:24:39.797 "nvme_version": "1.3" 00:24:39.797 }, 00:24:39.797 "ns_data": { 00:24:39.797 "id": 1, 00:24:39.797 "can_share": true 00:24:39.797 } 00:24:39.797 } 00:24:39.797 ], 00:24:39.797 "mp_policy": "active_passive" 00:24:39.797 } 00:24:39.797 } 00:24:39.797 ] 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f 
/tmp/tmp.00KuOCdRhg 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:39.797 20:38:31 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:39.797 rmmod nvme_tcp 00:24:39.797 rmmod nvme_fabrics 00:24:39.797 rmmod nvme_keyring 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 1434025 ']' 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 1434025 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 1434025 ']' 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 1434025 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1434025 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1434025' 00:24:39.797 killing process with pid 1434025 00:24:39.797 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 1434025 00:24:39.797 [2024-07-15 20:38:32.109944] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:39.798 [2024-07-15 20:38:32.109970] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:39.798 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 1434025 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:40.058 20:38:32 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:40.059 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:40.059 20:38:32 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:24:41.973 20:38:34 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:41.973 00:24:41.973 real 0m12.044s 00:24:41.973 user 0m4.236s 00:24:41.973 sys 0m6.241s 00:24:41.973 20:38:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:41.973 20:38:34 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:24:41.973 ************************************ 00:24:41.973 END TEST nvmf_async_init 00:24:41.973 ************************************ 00:24:41.973 20:38:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:41.973 20:38:34 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:41.973 20:38:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:41.973 20:38:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:41.973 20:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.234 ************************************ 00:24:42.234 START TEST dma 00:24:42.234 ************************************ 00:24:42.234 20:38:34 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:24:42.234 * Looking for test storage... 00:24:42.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.234 20:38:34 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.234 20:38:34 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.234 20:38:34 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.234 20:38:34 nvmf_tcp.dma -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.234 20:38:34 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.234 20:38:34 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.234 20:38:34 nvmf_tcp.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.234 20:38:34 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:24:42.234 20:38:34 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.234 20:38:34 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.234 20:38:34 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:24:42.234 20:38:34 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:24:42.234 00:24:42.234 real 0m0.117s 00:24:42.234 user 0m0.045s 00:24:42.234 sys 0m0.078s 00:24:42.234 20:38:34 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:42.234 20:38:34 nvmf_tcp.dma 
-- common/autotest_common.sh@10 -- # set +x 00:24:42.234 ************************************ 00:24:42.234 END TEST dma 00:24:42.234 ************************************ 00:24:42.234 20:38:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:42.234 20:38:34 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:42.234 20:38:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:42.234 20:38:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:42.234 20:38:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:42.234 ************************************ 00:24:42.234 START TEST nvmf_identify 00:24:42.234 ************************************ 00:24:42.234 20:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:24:42.495 * Looking for test storage... 00:24:42.496 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:24:42.496 20:38:34 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:50.639 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:24:50.640 Found 0000:31:00.0 (0x8086 - 0x159b) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:24:50.640 Found 0000:31:00.1 (0x8086 - 0x159b) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:24:50.640 Found net devices under 0000:31:00.0: cvl_0_0 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
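gather_supported_nvmf_pci_devs above matches NICs by PCI vendor:device ID (0x8086/0x159b, the E810 family, found here at 0000:31:00.0 and 0000:31:00.1) and then resolves each PCI function to its kernel netdev through sysfs. That resolution is just a glob, sketched for the first port of this run:

  pci=0000:31:00.0
  pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # one entry per netdev of this function
  pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs prefix, keep e.g. cvl_0_0
  echo "Found net devices under $pci: ${pci_net_devs[*]}"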
00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:24:50.640 Found net devices under 0000:31:00.1: cvl_0_1 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:50.640 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:50.640 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.654 ms 00:24:50.640 00:24:50.640 --- 10.0.0.2 ping statistics --- 00:24:50.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.640 rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.640 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:50.640 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.312 ms 00:24:50.640 00:24:50.640 --- 10.0.0.1 ping statistics --- 00:24:50.640 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.640 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1439094 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1439094 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1439094 ']' 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.640 20:38:42 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:50.640 [2024-07-15 20:38:42.856273] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
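waitforlisten above blocks until the freshly launched nvmf_tgt (nvmfpid 1439094 in this run) is alive and its RPC socket exists. A minimal stand-in for that helper, assuming the default /var/tmp/spdk.sock and leaving out the real function's retry accounting:

  pid=1439094                        # nvmfpid from the trace; substitute your own
  for _ in $(seq 1 100); do
      kill -0 "$pid" 2>/dev/null || { echo "target exited before listening" >&2; exit 1; }
      [ -S /var/tmp/spdk.sock ] && break
      sleep 0.1
  done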
00:24:50.640 [2024-07-15 20:38:42.856324] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:50.640 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.640 [2024-07-15 20:38:42.934816] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:50.640 [2024-07-15 20:38:43.003677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:50.640 [2024-07-15 20:38:43.003716] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:50.640 [2024-07-15 20:38:43.003723] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:50.640 [2024-07-15 20:38:43.003730] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:50.640 [2024-07-15 20:38:43.003736] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:50.640 [2024-07-15 20:38:43.003881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:50.640 [2024-07-15 20:38:43.003993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.640 [2024-07-15 20:38:43.004148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:50.640 [2024-07-15 20:38:43.004149] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 [2024-07-15 20:38:43.640697] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 Malloc0 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid 
ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 [2024-07-15 20:38:43.740291] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:24:51.585 [ 00:24:51.585 { 00:24:51.585 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:51.585 "subtype": "Discovery", 00:24:51.585 "listen_addresses": [ 00:24:51.585 { 00:24:51.585 "trtype": "TCP", 00:24:51.585 "adrfam": "IPv4", 00:24:51.585 "traddr": "10.0.0.2", 00:24:51.585 "trsvcid": "4420" 00:24:51.585 } 00:24:51.585 ], 00:24:51.585 "allow_any_host": true, 00:24:51.585 "hosts": [] 00:24:51.585 }, 00:24:51.585 { 00:24:51.585 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:51.585 "subtype": "NVMe", 00:24:51.585 "listen_addresses": [ 00:24:51.585 { 00:24:51.585 "trtype": "TCP", 00:24:51.585 "adrfam": "IPv4", 00:24:51.585 "traddr": "10.0.0.2", 00:24:51.585 "trsvcid": "4420" 00:24:51.585 } 00:24:51.585 ], 00:24:51.585 "allow_any_host": true, 00:24:51.585 "hosts": [], 00:24:51.585 "serial_number": "SPDK00000000000001", 00:24:51.585 "model_number": "SPDK bdev Controller", 00:24:51.585 "max_namespaces": 32, 00:24:51.585 "min_cntlid": 1, 00:24:51.585 "max_cntlid": 65519, 00:24:51.585 "namespaces": [ 00:24:51.585 { 00:24:51.585 "nsid": 1, 00:24:51.585 "bdev_name": "Malloc0", 00:24:51.585 "name": "Malloc0", 00:24:51.585 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:24:51.585 "eui64": "ABCDEF0123456789", 00:24:51.585 "uuid": "a1227534-845a-4204-b1c3-96fc94429345" 00:24:51.585 } 00:24:51.585 ] 00:24:51.585 } 00:24:51.585 ] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.585 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:24:51.585 [2024-07-15 20:38:43.800848] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
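Compared with the async_init target, the identify target pins every identifier the tool will later read back: a fixed serial number on the subsystem, an explicit NGUID/EUI-64 pair on the namespace, and a listener on the well-known discovery NQN alongside the subsystem listener. As a sketch of host/identify.sh's provisioning plus the invocation itself:

  $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_identify \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
      -L all

Running identify with every log flag enabled (-L all) is what produces the nvme_tcp/nvme_fabric DEBUG trace that follows: the icreq exchange, FABRIC CONNECT on the admin queue (CNTLID 0x0001), then the PROPERTY GET steps of controller initialization (read vs, read cap, check en).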
00:24:51.586 [2024-07-15 20:38:43.800892] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439180 ] 00:24:51.586 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.586 [2024-07-15 20:38:43.832925] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:24:51.586 [2024-07-15 20:38:43.832975] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:51.586 [2024-07-15 20:38:43.832980] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:51.586 [2024-07-15 20:38:43.832991] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:51.586 [2024-07-15 20:38:43.832997] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:51.586 [2024-07-15 20:38:43.836262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:24:51.586 [2024-07-15 20:38:43.836292] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x86cec0 0 00:24:51.586 [2024-07-15 20:38:43.844238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:51.586 [2024-07-15 20:38:43.844252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:51.586 [2024-07-15 20:38:43.844256] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:51.586 [2024-07-15 20:38:43.844259] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:51.586 [2024-07-15 20:38:43.844293] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.844298] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.844302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.844315] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:51.586 [2024-07-15 20:38:43.844330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.852243] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.852252] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.852256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.852272] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:51.586 [2024-07-15 20:38:43.852279] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:24:51.586 [2024-07-15 20:38:43.852284] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:24:51.586 [2024-07-15 20:38:43.852302] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852306] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.852318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.852330] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.852545] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.852552] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.852555] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852559] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.852566] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:24:51.586 [2024-07-15 20:38:43.852573] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:24:51.586 [2024-07-15 20:38:43.852580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852587] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852591] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.852598] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.852608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.852792] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.852799] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.852802] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852806] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.852811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:24:51.586 [2024-07-15 20:38:43.852819] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.852825] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852829] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.852832] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.852839] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.852849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.853028] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 
[2024-07-15 20:38:43.853035] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.853038] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853042] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.853047] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.853056] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.853070] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.853079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.853295] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.853302] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.853305] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853309] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.853313] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:24:51.586 [2024-07-15 20:38:43.853318] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.853325] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.853430] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:24:51.586 [2024-07-15 20:38:43.853437] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.853445] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853449] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.853459] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.853470] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.853696] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.853703] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.853706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:24:51.586 [2024-07-15 20:38:43.853710] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.853714] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:51.586 [2024-07-15 20:38:43.853723] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853727] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853731] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.853737] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.853747] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.853948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.853955] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.586 [2024-07-15 20:38:43.853958] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.853962] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.586 [2024-07-15 20:38:43.853966] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:51.586 [2024-07-15 20:38:43.853971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:24:51.586 [2024-07-15 20:38:43.853978] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:24:51.586 [2024-07-15 20:38:43.853990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:24:51.586 [2024-07-15 20:38:43.853999] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.854002] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.586 [2024-07-15 20:38:43.854009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.586 [2024-07-15 20:38:43.854019] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.586 [2024-07-15 20:38:43.854265] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.586 [2024-07-15 20:38:43.854272] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.586 [2024-07-15 20:38:43.854275] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.854279] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x86cec0): datao=0, datal=4096, cccid=0 00:24:51.586 [2024-07-15 20:38:43.854284] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8efe40) on tqpair(0x86cec0): expected_datao=0, payload_size=4096 00:24:51.586 [2024-07-15 20:38:43.854290] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
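The IDENTIFY (06) cdw10:00000001 NOTICE just above is the Identify Controller admin command (CNS=01h), and the c2h_data header with datal=4096 that follows is the controller shipping back the 4096-byte identify structure that feeds the report printed further down. Every admin command in this handshake is echoed at NOTICE level by nvme_qpair.c, so the init sequence (FABRIC CONNECT, the PROPERTY GET/SET enable dance, IDENTIFY, and the commands that follow) can be read without the DEBUG noise — a quick filter, where build.log is a hypothetical stand-in name for a saved copy of this console output:

  # build.log is an assumed file name for a saved copy of this output
  grep 'nvme_qpair.c' build.log | grep -o '\*NOTICE\*.*'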
00:24:51.586 [2024-07-15 20:38:43.854363] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.854367] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.586 [2024-07-15 20:38:43.854596] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.586 [2024-07-15 20:38:43.854603] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.854606] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.854619] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:24:51.587 [2024-07-15 20:38:43.854624] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:24:51.587 [2024-07-15 20:38:43.854628] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:24:51.587 [2024-07-15 20:38:43.854633] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:24:51.587 [2024-07-15 20:38:43.854637] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:24:51.587 [2024-07-15 20:38:43.854642] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:24:51.587 [2024-07-15 20:38:43.854650] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:24:51.587 [2024-07-15 20:38:43.854656] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854660] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854664] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854671] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.587 [2024-07-15 20:38:43.854681] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.587 [2024-07-15 20:38:43.854866] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.854872] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.854875] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.854886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854890] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.587 [2024-07-15 20:38:43.854905] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854909] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854912] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.587 [2024-07-15 20:38:43.854924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.587 [2024-07-15 20:38:43.854944] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854948] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854951] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.587 [2024-07-15 20:38:43.854961] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:24:51.587 [2024-07-15 20:38:43.854971] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:51.587 [2024-07-15 20:38:43.854977] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.854981] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.854987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.587 [2024-07-15 20:38:43.854999] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8efe40, cid 0, qid 0 00:24:51.587 [2024-07-15 20:38:43.855004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8effc0, cid 1, qid 0 00:24:51.587 [2024-07-15 20:38:43.855009] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0140, cid 2, qid 0 00:24:51.587 [2024-07-15 20:38:43.855013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.587 [2024-07-15 20:38:43.855018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0440, cid 4, qid 0 00:24:51.587 [2024-07-15 20:38:43.855296] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.855303] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.855306] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855310] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0440) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.855315] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:24:51.587 [2024-07-15 20:38:43.855320] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:24:51.587 [2024-07-15 20:38:43.855330] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855334] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.855340] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.587 [2024-07-15 20:38:43.855350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0440, cid 4, qid 0 00:24:51.587 [2024-07-15 20:38:43.855558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.587 [2024-07-15 20:38:43.855565] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.587 [2024-07-15 20:38:43.855569] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855572] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x86cec0): datao=0, datal=4096, cccid=4 00:24:51.587 [2024-07-15 20:38:43.855576] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0440) on tqpair(0x86cec0): expected_datao=0, payload_size=4096 00:24:51.587 [2024-07-15 20:38:43.855581] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855587] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855593] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855800] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.855806] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.855810] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855814] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0440) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.855824] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:24:51.587 [2024-07-15 20:38:43.855846] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855850] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.855856] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.587 [2024-07-15 20:38:43.855863] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855867] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.855870] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.855876] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.587 [2024-07-15 20:38:43.855889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x8f0440, cid 4, qid 0 00:24:51.587 [2024-07-15 20:38:43.855894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f05c0, cid 5, qid 0 00:24:51.587 [2024-07-15 20:38:43.856173] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.587 [2024-07-15 20:38:43.856180] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.587 [2024-07-15 20:38:43.856183] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.856187] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x86cec0): datao=0, datal=1024, cccid=4 00:24:51.587 [2024-07-15 20:38:43.856191] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0440) on tqpair(0x86cec0): expected_datao=0, payload_size=1024 00:24:51.587 [2024-07-15 20:38:43.856195] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.856202] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.856205] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.856211] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.856216] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.856219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.856223] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f05c0) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.900239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.900251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.900254] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0440) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.900272] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x86cec0) 00:24:51.587 [2024-07-15 20:38:43.900283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.587 [2024-07-15 20:38:43.900299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0440, cid 4, qid 0 00:24:51.587 [2024-07-15 20:38:43.900503] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.587 [2024-07-15 20:38:43.900509] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.587 [2024-07-15 20:38:43.900515] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900519] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x86cec0): datao=0, datal=3072, cccid=4 00:24:51.587 [2024-07-15 20:38:43.900523] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0440) on tqpair(0x86cec0): expected_datao=0, payload_size=3072 00:24:51.587 [2024-07-15 20:38:43.900527] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900555] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900559] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900765] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.587 [2024-07-15 20:38:43.900771] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.587 [2024-07-15 20:38:43.900774] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.587 [2024-07-15 20:38:43.900778] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0440) on tqpair=0x86cec0 00:24:51.587 [2024-07-15 20:38:43.900786] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.900790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x86cec0) 00:24:51.588 [2024-07-15 20:38:43.900796] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.588 [2024-07-15 20:38:43.900810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f0440, cid 4, qid 0 00:24:51.588 [2024-07-15 20:38:43.901039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.588 [2024-07-15 20:38:43.901045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.588 [2024-07-15 20:38:43.901049] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.901052] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x86cec0): datao=0, datal=8, cccid=4 00:24:51.588 [2024-07-15 20:38:43.901057] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8f0440) on tqpair(0x86cec0): expected_datao=0, payload_size=8 00:24:51.588 [2024-07-15 20:38:43.901061] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.901067] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.901071] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.941471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.588 [2024-07-15 20:38:43.941480] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.588 [2024-07-15 20:38:43.941484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.588 [2024-07-15 20:38:43.941488] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0440) on tqpair=0x86cec0 00:24:51.588 ===================================================== 00:24:51.588 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:51.588 ===================================================== 00:24:51.588 Controller Capabilities/Features 00:24:51.588 ================================ 00:24:51.588 Vendor ID: 0000 00:24:51.588 Subsystem Vendor ID: 0000 00:24:51.588 Serial Number: .................... 00:24:51.588 Model Number: ........................................ 
00:24:51.588 Firmware Version: 24.09 00:24:51.588 Recommended Arb Burst: 0 00:24:51.588 IEEE OUI Identifier: 00 00 00 00:24:51.588 Multi-path I/O 00:24:51.588 May have multiple subsystem ports: No 00:24:51.588 May have multiple controllers: No 00:24:51.588 Associated with SR-IOV VF: No 00:24:51.588 Max Data Transfer Size: 131072 00:24:51.588 Max Number of Namespaces: 0 00:24:51.588 Max Number of I/O Queues: 1024 00:24:51.588 NVMe Specification Version (VS): 1.3 00:24:51.588 NVMe Specification Version (Identify): 1.3 00:24:51.588 Maximum Queue Entries: 128 00:24:51.588 Contiguous Queues Required: Yes 00:24:51.588 Arbitration Mechanisms Supported 00:24:51.588 Weighted Round Robin: Not Supported 00:24:51.588 Vendor Specific: Not Supported 00:24:51.588 Reset Timeout: 15000 ms 00:24:51.588 Doorbell Stride: 4 bytes 00:24:51.588 NVM Subsystem Reset: Not Supported 00:24:51.588 Command Sets Supported 00:24:51.588 NVM Command Set: Supported 00:24:51.588 Boot Partition: Not Supported 00:24:51.588 Memory Page Size Minimum: 4096 bytes 00:24:51.588 Memory Page Size Maximum: 4096 bytes 00:24:51.588 Persistent Memory Region: Not Supported 00:24:51.588 Optional Asynchronous Events Supported 00:24:51.588 Namespace Attribute Notices: Not Supported 00:24:51.588 Firmware Activation Notices: Not Supported 00:24:51.588 ANA Change Notices: Not Supported 00:24:51.588 PLE Aggregate Log Change Notices: Not Supported 00:24:51.588 LBA Status Info Alert Notices: Not Supported 00:24:51.588 EGE Aggregate Log Change Notices: Not Supported 00:24:51.588 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.588 Zone Descriptor Change Notices: Not Supported 00:24:51.588 Discovery Log Change Notices: Supported 00:24:51.588 Controller Attributes 00:24:51.588 128-bit Host Identifier: Not Supported 00:24:51.588 Non-Operational Permissive Mode: Not Supported 00:24:51.588 NVM Sets: Not Supported 00:24:51.588 Read Recovery Levels: Not Supported 00:24:51.588 Endurance Groups: Not Supported 00:24:51.588 Predictable Latency Mode: Not Supported 00:24:51.588 Traffic Based Keep ALive: Not Supported 00:24:51.588 Namespace Granularity: Not Supported 00:24:51.588 SQ Associations: Not Supported 00:24:51.588 UUID List: Not Supported 00:24:51.588 Multi-Domain Subsystem: Not Supported 00:24:51.588 Fixed Capacity Management: Not Supported 00:24:51.588 Variable Capacity Management: Not Supported 00:24:51.588 Delete Endurance Group: Not Supported 00:24:51.588 Delete NVM Set: Not Supported 00:24:51.588 Extended LBA Formats Supported: Not Supported 00:24:51.588 Flexible Data Placement Supported: Not Supported 00:24:51.588 00:24:51.588 Controller Memory Buffer Support 00:24:51.588 ================================ 00:24:51.588 Supported: No 00:24:51.588 00:24:51.588 Persistent Memory Region Support 00:24:51.588 ================================ 00:24:51.588 Supported: No 00:24:51.588 00:24:51.588 Admin Command Set Attributes 00:24:51.588 ============================ 00:24:51.588 Security Send/Receive: Not Supported 00:24:51.588 Format NVM: Not Supported 00:24:51.588 Firmware Activate/Download: Not Supported 00:24:51.588 Namespace Management: Not Supported 00:24:51.588 Device Self-Test: Not Supported 00:24:51.588 Directives: Not Supported 00:24:51.588 NVMe-MI: Not Supported 00:24:51.588 Virtualization Management: Not Supported 00:24:51.588 Doorbell Buffer Config: Not Supported 00:24:51.588 Get LBA Status Capability: Not Supported 00:24:51.588 Command & Feature Lockdown Capability: Not Supported 00:24:51.588 Abort Command Limit: 1 00:24:51.588 Async 
Event Request Limit: 4 00:24:51.588 Number of Firmware Slots: N/A 00:24:51.588 Firmware Slot 1 Read-Only: N/A 00:24:51.588 Firmware Activation Without Reset: N/A 00:24:51.588 Multiple Update Detection Support: N/A 00:24:51.588 Firmware Update Granularity: No Information Provided 00:24:51.588 Per-Namespace SMART Log: No 00:24:51.588 Asymmetric Namespace Access Log Page: Not Supported 00:24:51.588 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:51.588 Command Effects Log Page: Not Supported 00:24:51.588 Get Log Page Extended Data: Supported 00:24:51.588 Telemetry Log Pages: Not Supported 00:24:51.588 Persistent Event Log Pages: Not Supported 00:24:51.588 Supported Log Pages Log Page: May Support 00:24:51.588 Commands Supported & Effects Log Page: Not Supported 00:24:51.588 Feature Identifiers & Effects Log Page:May Support 00:24:51.588 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.588 Data Area 4 for Telemetry Log: Not Supported 00:24:51.588 Error Log Page Entries Supported: 128 00:24:51.588 Keep Alive: Not Supported 00:24:51.588 00:24:51.588 NVM Command Set Attributes 00:24:51.588 ========================== 00:24:51.588 Submission Queue Entry Size 00:24:51.588 Max: 1 00:24:51.588 Min: 1 00:24:51.588 Completion Queue Entry Size 00:24:51.588 Max: 1 00:24:51.588 Min: 1 00:24:51.588 Number of Namespaces: 0 00:24:51.588 Compare Command: Not Supported 00:24:51.588 Write Uncorrectable Command: Not Supported 00:24:51.588 Dataset Management Command: Not Supported 00:24:51.588 Write Zeroes Command: Not Supported 00:24:51.588 Set Features Save Field: Not Supported 00:24:51.588 Reservations: Not Supported 00:24:51.588 Timestamp: Not Supported 00:24:51.588 Copy: Not Supported 00:24:51.588 Volatile Write Cache: Not Present 00:24:51.588 Atomic Write Unit (Normal): 1 00:24:51.588 Atomic Write Unit (PFail): 1 00:24:51.588 Atomic Compare & Write Unit: 1 00:24:51.588 Fused Compare & Write: Supported 00:24:51.588 Scatter-Gather List 00:24:51.588 SGL Command Set: Supported 00:24:51.588 SGL Keyed: Supported 00:24:51.588 SGL Bit Bucket Descriptor: Not Supported 00:24:51.588 SGL Metadata Pointer: Not Supported 00:24:51.588 Oversized SGL: Not Supported 00:24:51.588 SGL Metadata Address: Not Supported 00:24:51.588 SGL Offset: Supported 00:24:51.588 Transport SGL Data Block: Not Supported 00:24:51.588 Replay Protected Memory Block: Not Supported 00:24:51.588 00:24:51.588 Firmware Slot Information 00:24:51.588 ========================= 00:24:51.588 Active slot: 0 00:24:51.588 00:24:51.588 00:24:51.588 Error Log 00:24:51.588 ========= 00:24:51.588 00:24:51.588 Active Namespaces 00:24:51.588 ================= 00:24:51.588 Discovery Log Page 00:24:51.588 ================== 00:24:51.588 Generation Counter: 2 00:24:51.588 Number of Records: 2 00:24:51.588 Record Format: 0 00:24:51.588 00:24:51.588 Discovery Log Entry 0 00:24:51.588 ---------------------- 00:24:51.588 Transport Type: 3 (TCP) 00:24:51.588 Address Family: 1 (IPv4) 00:24:51.588 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:51.588 Entry Flags: 00:24:51.588 Duplicate Returned Information: 1 00:24:51.588 Explicit Persistent Connection Support for Discovery: 1 00:24:51.588 Transport Requirements: 00:24:51.588 Secure Channel: Not Required 00:24:51.588 Port ID: 0 (0x0000) 00:24:51.588 Controller ID: 65535 (0xffff) 00:24:51.588 Admin Max SQ Size: 128 00:24:51.588 Transport Service Identifier: 4420 00:24:51.588 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:51.588 Transport Address: 10.0.0.2 00:24:51.588 
Discovery Log Entry 1 00:24:51.588 ---------------------- 00:24:51.588 Transport Type: 3 (TCP) 00:24:51.588 Address Family: 1 (IPv4) 00:24:51.588 Subsystem Type: 2 (NVM Subsystem) 00:24:51.588 Entry Flags: 00:24:51.589 Duplicate Returned Information: 0 00:24:51.589 Explicit Persistent Connection Support for Discovery: 0 00:24:51.589 Transport Requirements: 00:24:51.589 Secure Channel: Not Required 00:24:51.589 Port ID: 0 (0x0000) 00:24:51.589 Controller ID: 65535 (0xffff) 00:24:51.589 Admin Max SQ Size: 128 00:24:51.589 Transport Service Identifier: 4420 00:24:51.589 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:24:51.589 Transport Address: 10.0.0.2 [2024-07-15 20:38:43.941575] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:24:51.589 [2024-07-15 20:38:43.941586] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8efe40) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.941592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.589 [2024-07-15 20:38:43.941597] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8effc0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.941602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.589 [2024-07-15 20:38:43.941607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f0140) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.941611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.589 [2024-07-15 20:38:43.941616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.941620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.589 [2024-07-15 20:38:43.941632] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.941637] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.941640] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.941647] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.941661] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.941942] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.941949] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.941952] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.941956] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.941963] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.941966] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.941970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.941976] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.941989] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.942248] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.942255] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.942259] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942263] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.942267] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:24:51.589 [2024-07-15 20:38:43.942272] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:24:51.589 [2024-07-15 20:38:43.942281] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942285] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942288] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.942295] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.942305] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.942522] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.942528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.942531] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.942545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942549] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942553] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.942559] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.942569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.942795] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.942801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.942806] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942810] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.942820] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942824] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.942827] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.942833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.942843] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.943098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.943104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.943107] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943111] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.943120] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943124] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943127] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.943134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.943144] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.943401] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.943407] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.943411] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943414] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.943424] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.943438] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.943448] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.943661] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.943667] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.943670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943674] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.589 [2024-07-15 20:38:43.943684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943688] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.589 [2024-07-15 20:38:43.943691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.589 [2024-07-15 20:38:43.943698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.589 [2024-07-15 20:38:43.943707] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.589 [2024-07-15 20:38:43.943955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.589 [2024-07-15 20:38:43.943962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.589 [2024-07-15 20:38:43.943965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.943970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.590 [2024-07-15 20:38:43.943980] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.943984] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.943987] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.590 [2024-07-15 20:38:43.943994] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.590 [2024-07-15 20:38:43.944004] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.590 [2024-07-15 20:38:43.944205] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.590 [2024-07-15 20:38:43.944211] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.590 [2024-07-15 20:38:43.944214] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.944218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.590 [2024-07-15 20:38:43.944227] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.948240] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.948244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x86cec0) 00:24:51.590 [2024-07-15 20:38:43.948251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.590 [2024-07-15 20:38:43.948263] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8f02c0, cid 3, qid 0 00:24:51.590 [2024-07-15 20:38:43.948442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.590 [2024-07-15 20:38:43.948448] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.590 [2024-07-15 20:38:43.948452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.590 [2024-07-15 20:38:43.948456] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8f02c0) on tqpair=0x86cec0 00:24:51.590 [2024-07-15 20:38:43.948463] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:24:51.590 00:24:51.590 20:38:43 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:24:51.852 [2024-07-15 20:38:43.989018] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
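This second invocation points spdk_nvme_identify at the NVM subsystem itself rather than the discovery service, so the report it produces describes cnode1 and its Malloc0 namespace. The listener is an ordinary NVMe/TCP endpoint, so — although this test never does so — the same target could be cross-checked from any Linux initiator with nvme-cli; a sketch, assuming the nvme-tcp kernel module is available and that the new controller shows up as /dev/nvme0 (an assumed device name):

  # Not part of this test: stock nvme-cli against the same target.
  # /dev/nvme0 is an assumed device name; check `nvme list` after connecting.
  modprobe nvme-tcp
  nvme discover -t tcp -a 10.0.0.2 -s 4420      # same discovery log entries as above
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                       # kernel-side view of the same controller data
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

The EAL startup and the same fabric connect/enable handshake repeat below, this time for tqpair 0x17d1ec0 against cnode1.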
00:24:51.852 [2024-07-15 20:38:43.989087] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1439280 ] 00:24:51.852 EAL: No free 2048 kB hugepages reported on node 1 00:24:51.852 [2024-07-15 20:38:44.019847] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:24:51.852 [2024-07-15 20:38:44.019880] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:24:51.852 [2024-07-15 20:38:44.019885] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:24:51.852 [2024-07-15 20:38:44.019896] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:24:51.852 [2024-07-15 20:38:44.019901] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:24:51.852 [2024-07-15 20:38:44.027262] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:24:51.852 [2024-07-15 20:38:44.027286] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x17d1ec0 0 00:24:51.852 [2024-07-15 20:38:44.027496] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:24:51.852 [2024-07-15 20:38:44.027505] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:24:51.852 [2024-07-15 20:38:44.027509] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:24:51.852 [2024-07-15 20:38:44.027512] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:24:51.852 [2024-07-15 20:38:44.027538] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.027543] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.027547] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.027557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:24:51.852 [2024-07-15 20:38:44.027569] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.035240] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.035250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.035253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035258] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.035268] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:24:51.852 [2024-07-15 20:38:44.035274] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:24:51.852 [2024-07-15 20:38:44.035280] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:24:51.852 [2024-07-15 20:38:44.035294] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035299] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:24:51.852 [2024-07-15 20:38:44.035302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.035310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.035322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.035507] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.035514] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.035517] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035521] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.035528] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:24:51.852 [2024-07-15 20:38:44.035535] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:24:51.852 [2024-07-15 20:38:44.035541] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035545] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035548] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.035555] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.035565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.035773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.035779] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.035782] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.035794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:24:51.852 [2024-07-15 20:38:44.035801] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.035808] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035811] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.035815] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.035821] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.035831] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.036039] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.036045] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:24:51.852 [2024-07-15 20:38:44.036049] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036052] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.036057] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.036066] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036070] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036074] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.036080] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.036090] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.036310] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.036317] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.036320] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036324] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.036328] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:24:51.852 [2024-07-15 20:38:44.036333] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.036340] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.036445] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:24:51.852 [2024-07-15 20:38:44.036449] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.036456] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.036470] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.036481] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.036685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.036692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.036697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on 
tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.036706] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:24:51.852 [2024-07-15 20:38:44.036715] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036718] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036722] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.852 [2024-07-15 20:38:44.036729] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.852 [2024-07-15 20:38:44.036739] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.852 [2024-07-15 20:38:44.036955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.852 [2024-07-15 20:38:44.036962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.852 [2024-07-15 20:38:44.036965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.852 [2024-07-15 20:38:44.036969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.852 [2024-07-15 20:38:44.036973] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:24:51.853 [2024-07-15 20:38:44.036978] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.036985] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:24:51.853 [2024-07-15 20:38:44.036992] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.037000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.037004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.037010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.853 [2024-07-15 20:38:44.037020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.853 [2024-07-15 20:38:44.037238] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.853 [2024-07-15 20:38:44.037245] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.853 [2024-07-15 20:38:44.037248] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.037252] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=4096, cccid=0 00:24:51.853 [2024-07-15 20:38:44.037257] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1854e40) on tqpair(0x17d1ec0): expected_datao=0, payload_size=4096 00:24:51.853 [2024-07-15 20:38:44.037261] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.037286] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.037291] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078420] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.853 [2024-07-15 20:38:44.078432] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.853 [2024-07-15 20:38:44.078436] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078440] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.853 [2024-07-15 20:38:44.078450] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:24:51.853 [2024-07-15 20:38:44.078455] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:24:51.853 [2024-07-15 20:38:44.078462] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:24:51.853 [2024-07-15 20:38:44.078466] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:24:51.853 [2024-07-15 20:38:44.078471] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:24:51.853 [2024-07-15 20:38:44.078475] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.078483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.078490] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078494] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078497] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078505] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.853 [2024-07-15 20:38:44.078517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.853 [2024-07-15 20:38:44.078686] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.853 [2024-07-15 20:38:44.078692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.853 [2024-07-15 20:38:44.078696] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078700] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.853 [2024-07-15 20:38:44.078706] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078710] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078714] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.853 [2024-07-15 20:38:44.078726] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078730] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078733] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078739] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.853 [2024-07-15 20:38:44.078745] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078748] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078752] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.853 [2024-07-15 20:38:44.078763] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078767] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.853 [2024-07-15 20:38:44.078781] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.078791] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.078797] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.078803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.078810] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.853 [2024-07-15 20:38:44.078821] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854e40, cid 0, qid 0 00:24:51.853 [2024-07-15 20:38:44.078827] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1854fc0, cid 1, qid 0 00:24:51.853 [2024-07-15 20:38:44.078832] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855140, cid 2, qid 0 00:24:51.853 [2024-07-15 20:38:44.078836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.853 [2024-07-15 20:38:44.078841] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.853 [2024-07-15 20:38:44.079054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.853 [2024-07-15 20:38:44.079061] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.853 [2024-07-15 20:38:44.079064] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.079068] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.853 [2024-07-15 20:38:44.079073] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:24:51.853 [2024-07-15 20:38:44.079078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to 
identify controller iocs specific (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.079085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.079091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.079097] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.079101] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.079105] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.079111] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:51.853 [2024-07-15 20:38:44.079121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.853 [2024-07-15 20:38:44.083241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.853 [2024-07-15 20:38:44.083250] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.853 [2024-07-15 20:38:44.083253] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.083257] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.853 [2024-07-15 20:38:44.083319] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.083328] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.083335] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.083338] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.083345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.853 [2024-07-15 20:38:44.083356] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.853 [2024-07-15 20:38:44.083535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.853 [2024-07-15 20:38:44.083542] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.853 [2024-07-15 20:38:44.083548] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.083552] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=4096, cccid=4 00:24:51.853 [2024-07-15 20:38:44.083556] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1855440) on tqpair(0x17d1ec0): expected_datao=0, payload_size=4096 00:24:51.853 [2024-07-15 20:38:44.083560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.083585] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.083590] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.124415] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:24:51.853 [2024-07-15 20:38:44.124425] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.853 [2024-07-15 20:38:44.124428] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.124432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.853 [2024-07-15 20:38:44.124441] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:24:51.853 [2024-07-15 20:38:44.124453] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.124463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:24:51.853 [2024-07-15 20:38:44.124470] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.853 [2024-07-15 20:38:44.124473] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.853 [2024-07-15 20:38:44.124480] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.124491] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.854 [2024-07-15 20:38:44.124756] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.854 [2024-07-15 20:38:44.124763] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.854 [2024-07-15 20:38:44.124766] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.124770] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=4096, cccid=4 00:24:51.854 [2024-07-15 20:38:44.124774] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1855440) on tqpair(0x17d1ec0): expected_datao=0, payload_size=4096 00:24:51.854 [2024-07-15 20:38:44.124778] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.124785] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.124788] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169241] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.169253] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.169256] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169261] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.169274] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.169284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.169292] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169295] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.169303] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.169318] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.854 [2024-07-15 20:38:44.169536] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.854 [2024-07-15 20:38:44.169543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.854 [2024-07-15 20:38:44.169547] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169550] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=4096, cccid=4 00:24:51.854 [2024-07-15 20:38:44.169555] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1855440) on tqpair(0x17d1ec0): expected_datao=0, payload_size=4096 00:24:51.854 [2024-07-15 20:38:44.169559] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169615] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.169619] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210399] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.210410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.210413] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210417] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.210425] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210432] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210445] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210451] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210456] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210461] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210465] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:24:51.854 [2024-07-15 20:38:44.210470] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:24:51.854 [2024-07-15 20:38:44.210475] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:24:51.854 [2024-07-15 20:38:44.210487] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.210498] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.210505] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210508] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210512] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.210518] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:24:51.854 [2024-07-15 20:38:44.210532] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 00:24:51.854 [2024-07-15 20:38:44.210537] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18555c0, cid 5, qid 0 00:24:51.854 [2024-07-15 20:38:44.210672] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.210679] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.210685] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210689] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.210695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.210701] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.210704] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210708] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18555c0) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.210717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.210727] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.210737] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18555c0, cid 5, qid 0 00:24:51.854 [2024-07-15 20:38:44.210936] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.210943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.210946] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210950] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18555c0) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.210958] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.210962] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.210968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.210978] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18555c0, cid 5, qid 0 00:24:51.854 [2024-07-15 20:38:44.211193] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.211199] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.211203] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.211207] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18555c0) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.211216] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.211220] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.211226] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.215245] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18555c0, cid 5, qid 0 00:24:51.854 [2024-07-15 20:38:44.215410] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.854 [2024-07-15 20:38:44.215417] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.854 [2024-07-15 20:38:44.215420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.215424] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18555c0) on tqpair=0x17d1ec0 00:24:51.854 [2024-07-15 20:38:44.215438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.215442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.215448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.215455] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.215459] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.215467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.215474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.215478] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x17d1ec0) 00:24:51.854 [2024-07-15 20:38:44.215484] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.854 [2024-07-15 20:38:44.215491] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.854 [2024-07-15 20:38:44.215494] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17d1ec0) 00:24:51.855 [2024-07-15 20:38:44.215500] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.855 [2024-07-15 20:38:44.215511] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18555c0, cid 5, qid 0 00:24:51.855 [2024-07-15 20:38:44.215517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855440, cid 4, qid 0 
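
The four GET LOG PAGE commands above pack both the page identifier and the transfer length into CDW10, which is why the c2h_data PDUs that follow carry datal=8192, 512, 512, and 4096 for cccids 5, 4, 6, and 7. Here is a small decoder, assuming only the CDW10 layout from the NVMe base specification (LID in bits 7:0, zero-based low dword count NUMDL in bits 31:16); the helper name is ours:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the cdw10 of an admin GET LOG PAGE command. */
    static void decode_get_log_page(uint32_t cdw10)
    {
        uint32_t lid   = cdw10 & 0xff;           /* Log Page Identifier   */
        uint32_t numdl = (cdw10 >> 16) & 0xffff; /* dwords - 1 (low word) */
        uint32_t bytes = (numdl + 1) * 4;

        printf("cdw10=0x%08x -> LID 0x%02x, %u bytes\n",
               (unsigned)cdw10, (unsigned)lid, (unsigned)bytes);
    }

    int main(void)
    {
        /* The four requests printed in the trace above. */
        decode_get_log_page(0x07ff0001); /* Error Information, 8 KiB            */
        decode_get_log_page(0x007f0002); /* SMART / Health, 512 bytes           */
        decode_get_log_page(0x007f0003); /* Firmware Slot, 512 bytes            */
        decode_get_log_page(0x03ff0005); /* Commands Supported & Effects, 4 KiB */
        return 0;
    }

The computed sizes match the payload_size values in the c2h_data records that follow (8192 for cid 5, 512 for cids 4 and 6, 4096 for cid 7).
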
00:24:51.855 [2024-07-15 20:38:44.215521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1855740, cid 6, qid 0 00:24:51.855 [2024-07-15 20:38:44.215526] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18558c0, cid 7, qid 0 00:24:51.855 [2024-07-15 20:38:44.215771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.855 [2024-07-15 20:38:44.215778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.855 [2024-07-15 20:38:44.215781] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215785] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=8192, cccid=5 00:24:51.855 [2024-07-15 20:38:44.215789] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18555c0) on tqpair(0x17d1ec0): expected_datao=0, payload_size=8192 00:24:51.855 [2024-07-15 20:38:44.215794] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215870] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215874] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.855 [2024-07-15 20:38:44.215885] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.855 [2024-07-15 20:38:44.215891] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215894] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=512, cccid=4 00:24:51.855 [2024-07-15 20:38:44.215898] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1855440) on tqpair(0x17d1ec0): expected_datao=0, payload_size=512 00:24:51.855 [2024-07-15 20:38:44.215902] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215909] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215912] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215918] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.855 [2024-07-15 20:38:44.215923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.855 [2024-07-15 20:38:44.215926] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215930] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=512, cccid=6 00:24:51.855 [2024-07-15 20:38:44.215934] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1855740) on tqpair(0x17d1ec0): expected_datao=0, payload_size=512 00:24:51.855 [2024-07-15 20:38:44.215938] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215945] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215948] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:24:51.855 [2024-07-15 20:38:44.215961] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:24:51.855 [2024-07-15 20:38:44.215964] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215968] 
nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x17d1ec0): datao=0, datal=4096, cccid=7 00:24:51.855 [2024-07-15 20:38:44.215972] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x18558c0) on tqpair(0x17d1ec0): expected_datao=0, payload_size=4096 00:24:51.855 [2024-07-15 20:38:44.215976] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215983] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.215986] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.216018] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.855 [2024-07-15 20:38:44.216024] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.855 [2024-07-15 20:38:44.216027] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.216031] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18555c0) on tqpair=0x17d1ec0 00:24:51.855 [2024-07-15 20:38:44.216043] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.855 [2024-07-15 20:38:44.216049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.855 [2024-07-15 20:38:44.216052] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.216056] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855440) on tqpair=0x17d1ec0 00:24:51.855 [2024-07-15 20:38:44.216065] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.855 [2024-07-15 20:38:44.216071] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.855 [2024-07-15 20:38:44.216074] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.216078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855740) on tqpair=0x17d1ec0 00:24:51.855 [2024-07-15 20:38:44.216085] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.855 [2024-07-15 20:38:44.216090] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.855 [2024-07-15 20:38:44.216094] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.855 [2024-07-15 20:38:44.216097] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18558c0) on tqpair=0x17d1ec0 00:24:51.855 ===================================================== 00:24:51.855 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:51.855 ===================================================== 00:24:51.855 Controller Capabilities/Features 00:24:51.855 ================================ 00:24:51.855 Vendor ID: 8086 00:24:51.855 Subsystem Vendor ID: 8086 00:24:51.855 Serial Number: SPDK00000000000001 00:24:51.855 Model Number: SPDK bdev Controller 00:24:51.855 Firmware Version: 24.09 00:24:51.855 Recommended Arb Burst: 6 00:24:51.855 IEEE OUI Identifier: e4 d2 5c 00:24:51.855 Multi-path I/O 00:24:51.855 May have multiple subsystem ports: Yes 00:24:51.855 May have multiple controllers: Yes 00:24:51.855 Associated with SR-IOV VF: No 00:24:51.855 Max Data Transfer Size: 131072 00:24:51.855 Max Number of Namespaces: 32 00:24:51.855 Max Number of I/O Queues: 127 00:24:51.855 NVMe Specification Version (VS): 1.3 00:24:51.855 NVMe Specification Version (Identify): 1.3 00:24:51.855 Maximum Queue Entries: 128 00:24:51.855 Contiguous Queues Required: Yes 00:24:51.855 
Arbitration Mechanisms Supported 00:24:51.855 Weighted Round Robin: Not Supported 00:24:51.855 Vendor Specific: Not Supported 00:24:51.855 Reset Timeout: 15000 ms 00:24:51.855 Doorbell Stride: 4 bytes 00:24:51.855 NVM Subsystem Reset: Not Supported 00:24:51.855 Command Sets Supported 00:24:51.855 NVM Command Set: Supported 00:24:51.855 Boot Partition: Not Supported 00:24:51.855 Memory Page Size Minimum: 4096 bytes 00:24:51.855 Memory Page Size Maximum: 4096 bytes 00:24:51.855 Persistent Memory Region: Not Supported 00:24:51.855 Optional Asynchronous Events Supported 00:24:51.855 Namespace Attribute Notices: Supported 00:24:51.855 Firmware Activation Notices: Not Supported 00:24:51.855 ANA Change Notices: Not Supported 00:24:51.855 PLE Aggregate Log Change Notices: Not Supported 00:24:51.855 LBA Status Info Alert Notices: Not Supported 00:24:51.855 EGE Aggregate Log Change Notices: Not Supported 00:24:51.856 Normal NVM Subsystem Shutdown event: Not Supported 00:24:51.856 Zone Descriptor Change Notices: Not Supported 00:24:51.856 Discovery Log Change Notices: Not Supported 00:24:51.856 Controller Attributes 00:24:51.856 128-bit Host Identifier: Supported 00:24:51.856 Non-Operational Permissive Mode: Not Supported 00:24:51.856 NVM Sets: Not Supported 00:24:51.856 Read Recovery Levels: Not Supported 00:24:51.856 Endurance Groups: Not Supported 00:24:51.856 Predictable Latency Mode: Not Supported 00:24:51.856 Traffic Based Keep ALive: Not Supported 00:24:51.856 Namespace Granularity: Not Supported 00:24:51.856 SQ Associations: Not Supported 00:24:51.856 UUID List: Not Supported 00:24:51.856 Multi-Domain Subsystem: Not Supported 00:24:51.856 Fixed Capacity Management: Not Supported 00:24:51.856 Variable Capacity Management: Not Supported 00:24:51.856 Delete Endurance Group: Not Supported 00:24:51.856 Delete NVM Set: Not Supported 00:24:51.856 Extended LBA Formats Supported: Not Supported 00:24:51.856 Flexible Data Placement Supported: Not Supported 00:24:51.856 00:24:51.856 Controller Memory Buffer Support 00:24:51.856 ================================ 00:24:51.856 Supported: No 00:24:51.856 00:24:51.856 Persistent Memory Region Support 00:24:51.856 ================================ 00:24:51.856 Supported: No 00:24:51.856 00:24:51.856 Admin Command Set Attributes 00:24:51.856 ============================ 00:24:51.856 Security Send/Receive: Not Supported 00:24:51.856 Format NVM: Not Supported 00:24:51.856 Firmware Activate/Download: Not Supported 00:24:51.856 Namespace Management: Not Supported 00:24:51.856 Device Self-Test: Not Supported 00:24:51.856 Directives: Not Supported 00:24:51.856 NVMe-MI: Not Supported 00:24:51.856 Virtualization Management: Not Supported 00:24:51.856 Doorbell Buffer Config: Not Supported 00:24:51.856 Get LBA Status Capability: Not Supported 00:24:51.856 Command & Feature Lockdown Capability: Not Supported 00:24:51.856 Abort Command Limit: 4 00:24:51.856 Async Event Request Limit: 4 00:24:51.856 Number of Firmware Slots: N/A 00:24:51.856 Firmware Slot 1 Read-Only: N/A 00:24:51.856 Firmware Activation Without Reset: N/A 00:24:51.856 Multiple Update Detection Support: N/A 00:24:51.856 Firmware Update Granularity: No Information Provided 00:24:51.856 Per-Namespace SMART Log: No 00:24:51.856 Asymmetric Namespace Access Log Page: Not Supported 00:24:51.856 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:24:51.856 Command Effects Log Page: Supported 00:24:51.856 Get Log Page Extended Data: Supported 00:24:51.856 Telemetry Log Pages: Not Supported 00:24:51.856 Persistent Event Log 
Pages: Not Supported 00:24:51.856 Supported Log Pages Log Page: May Support 00:24:51.856 Commands Supported & Effects Log Page: Not Supported 00:24:51.856 Feature Identifiers & Effects Log Page:May Support 00:24:51.856 NVMe-MI Commands & Effects Log Page: May Support 00:24:51.856 Data Area 4 for Telemetry Log: Not Supported 00:24:51.856 Error Log Page Entries Supported: 128 00:24:51.856 Keep Alive: Supported 00:24:51.856 Keep Alive Granularity: 10000 ms 00:24:51.856 00:24:51.856 NVM Command Set Attributes 00:24:51.856 ========================== 00:24:51.856 Submission Queue Entry Size 00:24:51.856 Max: 64 00:24:51.856 Min: 64 00:24:51.856 Completion Queue Entry Size 00:24:51.856 Max: 16 00:24:51.856 Min: 16 00:24:51.856 Number of Namespaces: 32 00:24:51.856 Compare Command: Supported 00:24:51.856 Write Uncorrectable Command: Not Supported 00:24:51.856 Dataset Management Command: Supported 00:24:51.856 Write Zeroes Command: Supported 00:24:51.856 Set Features Save Field: Not Supported 00:24:51.856 Reservations: Supported 00:24:51.856 Timestamp: Not Supported 00:24:51.856 Copy: Supported 00:24:51.856 Volatile Write Cache: Present 00:24:51.856 Atomic Write Unit (Normal): 1 00:24:51.856 Atomic Write Unit (PFail): 1 00:24:51.856 Atomic Compare & Write Unit: 1 00:24:51.856 Fused Compare & Write: Supported 00:24:51.856 Scatter-Gather List 00:24:51.856 SGL Command Set: Supported 00:24:51.856 SGL Keyed: Supported 00:24:51.856 SGL Bit Bucket Descriptor: Not Supported 00:24:51.856 SGL Metadata Pointer: Not Supported 00:24:51.856 Oversized SGL: Not Supported 00:24:51.856 SGL Metadata Address: Not Supported 00:24:51.856 SGL Offset: Supported 00:24:51.856 Transport SGL Data Block: Not Supported 00:24:51.856 Replay Protected Memory Block: Not Supported 00:24:51.856 00:24:51.856 Firmware Slot Information 00:24:51.856 ========================= 00:24:51.856 Active slot: 1 00:24:51.856 Slot 1 Firmware Revision: 24.09 00:24:51.856 00:24:51.856 00:24:51.856 Commands Supported and Effects 00:24:51.856 ============================== 00:24:51.856 Admin Commands 00:24:51.856 -------------- 00:24:51.856 Get Log Page (02h): Supported 00:24:51.856 Identify (06h): Supported 00:24:51.856 Abort (08h): Supported 00:24:51.856 Set Features (09h): Supported 00:24:51.856 Get Features (0Ah): Supported 00:24:51.856 Asynchronous Event Request (0Ch): Supported 00:24:51.856 Keep Alive (18h): Supported 00:24:51.856 I/O Commands 00:24:51.856 ------------ 00:24:51.856 Flush (00h): Supported LBA-Change 00:24:51.856 Write (01h): Supported LBA-Change 00:24:51.856 Read (02h): Supported 00:24:51.856 Compare (05h): Supported 00:24:51.856 Write Zeroes (08h): Supported LBA-Change 00:24:51.856 Dataset Management (09h): Supported LBA-Change 00:24:51.856 Copy (19h): Supported LBA-Change 00:24:51.856 00:24:51.856 Error Log 00:24:51.856 ========= 00:24:51.856 00:24:51.856 Arbitration 00:24:51.856 =========== 00:24:51.856 Arbitration Burst: 1 00:24:51.856 00:24:51.856 Power Management 00:24:51.856 ================ 00:24:51.857 Number of Power States: 1 00:24:51.857 Current Power State: Power State #0 00:24:51.857 Power State #0: 00:24:51.857 Max Power: 0.00 W 00:24:51.857 Non-Operational State: Operational 00:24:51.857 Entry Latency: Not Reported 00:24:51.857 Exit Latency: Not Reported 00:24:51.857 Relative Read Throughput: 0 00:24:51.857 Relative Read Latency: 0 00:24:51.857 Relative Write Throughput: 0 00:24:51.857 Relative Write Latency: 0 00:24:51.857 Idle Power: Not Reported 00:24:51.857 Active Power: Not Reported 00:24:51.857 
Non-Operational Permissive Mode: Not Supported 00:24:51.857 00:24:51.857 Health Information 00:24:51.857 ================== 00:24:51.857 Critical Warnings: 00:24:51.857 Available Spare Space: OK 00:24:51.857 Temperature: OK 00:24:51.857 Device Reliability: OK 00:24:51.857 Read Only: No 00:24:51.857 Volatile Memory Backup: OK 00:24:51.857 Current Temperature: 0 Kelvin (-273 Celsius) 00:24:51.857 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:24:51.857 Available Spare: 0% 00:24:51.857 Available Spare Threshold: 0% 00:24:51.857 Life Percentage Used: 0% [2024-07-15 20:38:44.216194] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.857 [2024-07-15 20:38:44.216199] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x17d1ec0) 00:24:51.857 [2024-07-15 20:38:44.216205] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.857 [2024-07-15 20:38:44.216217] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18558c0, cid 7, qid 0 00:24:51.857 [2024-07-15 20:38:44.216405] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.857 [2024-07-15 20:38:44.216412] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.857 [2024-07-15 20:38:44.216416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.857 [2024-07-15 20:38:44.216420] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18558c0) on tqpair=0x17d1ec0 00:24:51.857 [2024-07-15 20:38:44.216450] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:24:51.857 [2024-07-15 20:38:44.216459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854e40) on tqpair=0x17d1ec0 00:24:51.857 [2024-07-15 20:38:44.216464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.857 [2024-07-15 20:38:44.216469] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1854fc0) on tqpair=0x17d1ec0 00:24:51.857 [2024-07-15 20:38:44.216474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.857 [2024-07-15 20:38:44.216481] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1855140) on tqpair=0x17d1ec0 00:24:51.858 [2024-07-15 20:38:44.216485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.858 [2024-07-15 20:38:44.216490] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.858 [2024-07-15 20:38:44.216494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:51.858 [2024-07-15 20:38:44.216502] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.858 [2024-07-15 20:38:44.216516] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.858 [2024-07-15 20:38:44.216528] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.858 [2024-07-15 20:38:44.216729] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.858 [2024-07-15 20:38:44.216735] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.858 [2024-07-15 20:38:44.216738] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216742] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.858 [2024-07-15 20:38:44.216749] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216752] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216756] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.858 [2024-07-15 20:38:44.216762] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.858 [2024-07-15 20:38:44.216775] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.858 [2024-07-15 20:38:44.216970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.858 [2024-07-15 20:38:44.216976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.858 [2024-07-15 20:38:44.216979] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.216983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.858 [2024-07-15 20:38:44.216988] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:24:51.858 [2024-07-15 20:38:44.216992] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:24:51.858 [2024-07-15 20:38:44.217001] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.217005] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.858 [2024-07-15 20:38:44.217008] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.217015] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.217025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.217214] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.217220] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.217223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217227] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.217243] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217247] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217253] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.217259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.217269] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.217436] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.217442] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.217446] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217449] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.217459] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217462] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217466] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.217472] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.217482] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.217656] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.217663] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.217666] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217670] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.217679] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217683] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217686] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.217693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.217703] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.217921] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.217928] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.217932] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.217945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217949] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.217953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.217959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.217969] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 
20:38:44.218188] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.218194] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.218197] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.218210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218214] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218217] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.218226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.218241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.218438] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.218445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.218448] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.218461] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218465] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218468] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.218475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.218484] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.218688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.218694] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 [2024-07-15 20:38:44.218697] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218701] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0 00:24:51.859 [2024-07-15 20:38:44.218710] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218714] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:24:51.859 [2024-07-15 20:38:44.218718] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0) 00:24:51.859 [2024-07-15 20:38:44.218724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:51.859 [2024-07-15 20:38:44.218734] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0 00:24:51.859 [2024-07-15 20:38:44.218955] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:24:51.859 [2024-07-15 20:38:44.218962] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:24:51.859 
00:24:51.859 [2024-07-15 20:38:44.218965] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:51.859 [2024-07-15 20:38:44.218969] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0
00:24:51.859 [2024-07-15 20:38:44.218978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:51.859 [2024-07-15 20:38:44.218981] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:51.859 [2024-07-15 20:38:44.218985] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0)
00:24:51.859 [2024-07-15 20:38:44.218991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:51.859 [2024-07-15 20:38:44.219001] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0
00:24:51.859 [2024-07-15 20:38:44.219210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:51.859 [2024-07-15 20:38:44.219217] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:51.859 [2024-07-15 20:38:44.219220] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:51.859 [2024-07-15 20:38:44.219224] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0
00:24:51.859 [2024-07-15 20:38:44.223240] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:24:51.859 [2024-07-15 20:38:44.223246] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:24:51.860 [2024-07-15 20:38:44.223250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x17d1ec0)
00:24:51.860 [2024-07-15 20:38:44.223257] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:51.860 [2024-07-15 20:38:44.223270] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x18552c0, cid 3, qid 0
00:24:51.860 [2024-07-15 20:38:44.223439] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:24:51.860 [2024-07-15 20:38:44.223445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:24:51.860 [2024-07-15 20:38:44.223449] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:24:51.860 [2024-07-15 20:38:44.223452] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x18552c0) on tqpair=0x17d1ec0
00:24:51.860 [2024-07-15 20:38:44.223459] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 6 milliseconds
00:24:52.119 0%
00:24:52.119 Data Units Read: 0
00:24:52.119 Data Units Written: 0
00:24:52.119 Host Read Commands: 0
00:24:52.119 Host Write Commands: 0
00:24:52.119 Controller Busy Time: 0 minutes
00:24:52.119 Power Cycles: 0
00:24:52.119 Power On Hours: 0 hours
00:24:52.119 Unsafe Shutdowns: 0
00:24:52.119 Unrecoverable Media Errors: 0
00:24:52.119 Lifetime Error Log Entries: 0
00:24:52.119 Warning Temperature Time: 0 minutes
00:24:52.119 Critical Temperature Time: 0 minutes
00:24:52.119
00:24:52.119 Number of Queues
00:24:52.119 ================
00:24:52.119 Number of I/O Submission Queues: 127
00:24:52.119 Number of I/O Completion Queues: 127
00:24:52.119
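Note: the all-zero health counters above are expected for a freshly created target. For reference, a roughly equivalent initiator-side query with nvme-cli might look like the sketch below (the /dev/nvme1 name is an assumption — it depends on enumeration order on the host):

    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
    nvme smart-log /dev/nvme1
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1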
00:24:52.119 Active Namespaces
00:24:52.119 =================
00:24:52.119 Namespace ID:1
00:24:52.119 Error Recovery Timeout: Unlimited
00:24:52.119 Command Set Identifier: NVM (00h)
00:24:52.119 Deallocate: Supported
00:24:52.119 Deallocated/Unwritten Error: Not Supported
00:24:52.119 Deallocated Read Value: Unknown
00:24:52.119 Deallocate in Write Zeroes: Not Supported
00:24:52.119 Deallocated Guard Field: 0xFFFF
00:24:52.119 Flush: Supported
00:24:52.119 Reservation: Supported
00:24:52.119 Namespace Sharing Capabilities: Multiple Controllers
00:24:52.119 Size (in LBAs): 131072 (0GiB)
00:24:52.119 Capacity (in LBAs): 131072 (0GiB)
00:24:52.119 Utilization (in LBAs): 131072 (0GiB)
00:24:52.119 NGUID: ABCDEF0123456789ABCDEF0123456789
00:24:52.119 EUI64: ABCDEF0123456789
00:24:52.119 UUID: a1227534-845a-4204-b1c3-96fc94429345
00:24:52.119 Thin Provisioning: Not Supported
00:24:52.119 Per-NS Atomic Units: Yes
00:24:52.119 Atomic Boundary Size (Normal): 0
00:24:52.119 Atomic Boundary Size (PFail): 0
00:24:52.119 Atomic Boundary Offset: 0
00:24:52.119 Maximum Single Source Range Length: 65535
00:24:52.119 Maximum Copy Length: 65535
00:24:52.119 Maximum Source Range Count: 1
00:24:52.119 NGUID/EUI64 Never Reused: No
00:24:52.119 Namespace Write Protected: No
00:24:52.119 Number of LBA Formats: 1
00:24:52.119 Current LBA Format: LBA Format #00
00:24:52.119 LBA Format #00: Data Size: 512 Metadata Size: 0
00:24:52.119
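Note: the "(0GiB)" annotations are integer truncation, not a zero-sized namespace — 131072 LBAs of 512 bytes is 64 MiB (consistent with the 64 MB malloc bdevs these host tests create), which rounds down to 0 GiB in the report:

    echo $(( 131072 * 512 ))                 # 67108864 bytes
    echo $(( 131072 * 512 / 1024 / 1024 ))   # 64 (MiB) -> printed as 0 GiB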
00:24:52.119 20:38:44 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync
00:24:52.119 20:38:44 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20}
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:24:52.120 rmmod nvme_tcp
00:24:52.120 rmmod nvme_fabrics
00:24:52.120 rmmod nvme_keyring
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1439094 ']'
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1439094
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1439094 ']'
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1439094
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1439094
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1439094'
00:24:52.120 killing process with pid 1439094
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1439094
00:24:52.120 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1439094
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:24:52.379 20:38:44 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:24:54.289 20:38:46 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:24:54.289
00:24:54.289 real    0m12.006s
00:24:54.289 user    0m8.349s
00:24:54.289 sys     0m6.396s
00:24:54.289 20:38:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:24:54.289 20:38:46 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:24:54.289 ************************************
00:24:54.289 END TEST nvmf_identify
00:24:54.289 ************************************
00:24:54.289 20:38:46 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:24:54.289 20:38:46 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:54.289 20:38:46 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:24:54.289 20:38:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:24:54.289 20:38:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:24:54.551 ************************************
00:24:54.551 START TEST nvmf_perf
00:24:54.551 ************************************
00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:24:54.551 * Looking for test storage...
00:24:54.551 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.551 20:38:46 
nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.551 20:38:46 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set 
+x 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:02.785 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:02.785 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:02.785 Found net devices under 0000:31:00.0: cvl_0_0 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:02.785 Found net devices under 0000:31:00.1: cvl_0_1 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:25:02.785 20:38:54 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:25:02.785 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:25:02.785 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.520 ms
00:25:02.785
00:25:02.785 --- 10.0.0.2 ping statistics ---
00:25:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:02.785 rtt min/avg/max/mdev = 0.520/0.520/0.520/0.000 ms
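Note: at this point the target-side port lives in its own network namespace while the initiator port stays in the root namespace; a quick way to confirm the addressing on both ends of the link (a sketch, interface and namespace names as assigned above):

    ip -4 addr show cvl_0_1                                  # initiator side, expect 10.0.0.1/24
    ip netns exec cvl_0_0_ns_spdk ip -4 addr show cvl_0_0    # target side, expect 10.0.0.2/24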
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:25:02.785 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:25:02.785 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms
00:25:02.785
00:25:02.785 --- 10.0.0.1 ping statistics ---
00:25:02.785 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:25:02.785 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1444123
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1444123
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1444123 ']'
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:02.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:02.785 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:25:03.045 [2024-07-15 20:38:55.137777] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:25:03.045 [2024-07-15 20:38:55.137829] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:03.045 EAL: No free 2048 kB hugepages reported on node 1
00:25:03.045 [2024-07-15 20:38:55.213154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:03.045 [2024-07-15 20:38:55.280349] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:03.045 [2024-07-15 20:38:55.280386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:03.045 [2024-07-15 20:38:55.280393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:03.045 [2024-07-15 20:38:55.280399] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:03.045 [2024-07-15 20:38:55.280405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:03.045 [2024-07-15 20:38:55.280541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:25:03.045 [2024-07-15 20:38:55.280654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:25:03.045 [2024-07-15 20:38:55.280809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:25:03.045 [2024-07-15 20:38:55.280810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
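Note: the "EAL: No free 2048 kB hugepages reported on node 1" line is informational here — the target goes on to start, so pages were evidently available on another node. If startup ever fails for lack of hugepages, reserving them looks roughly like this (the page count is an assumption; SPDK checkouts also ship scripts/setup.sh for the same job):

    echo 1024 | sudo tee /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
    # or: sudo HUGEMEM=2048 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh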
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:25:03.616 20:38:55 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:25:04.187 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev
00:25:04.187 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr'
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:65:00.0
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0'
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:65:00.0 ']'
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1'
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']'
00:25:04.448 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o
00:25:04.709 [2024-07-15 20:38:56.929675] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:04.709 20:38:56 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:04.970 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:04.970 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:25:04.970 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs
00:25:04.970 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
00:25:05.230 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:05.492 [2024-07-15 20:38:57.616264] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:05.492 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
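Note: condensed, the target bring-up traced above is just a handful of RPCs; a sketch using the same paths and names (Nvme0n1 comes from gen_nvme.sh/load_subsystem_config on this rig):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o
    $RPC bdev_malloc_create 64 512                                # creates Malloc0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420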
00:25:05.492 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:65:00.0 ']'
00:25:05.492 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:05.492 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']'
00:25:05.492 20:38:57 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
00:25:06.875 Initializing NVMe Controllers
00:25:06.875 Attached to NVMe Controller at 0000:65:00.0 [144d:a80a]
00:25:06.875 Associating PCIE (0000:65:00.0) NSID 1 with lcore 0
00:25:06.875 Initialization complete. Launching workers.
00:25:06.875 ========================================================
00:25:06.875                      Latency(us)
00:25:06.875 Device Information                     :   IOPS      MiB/s    Average        min        max
00:25:06.875 PCIE (0000:65:00.0) NSID 1 from core 0:  79751.09    311.53     400.40      13.60    4912.40
00:25:06.875 ========================================================
00:25:06.875 Total                                  :  79751.09    311.53     400.40      13.60    4912.40
00:25:06.875
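Note: the run above is the local-PCIe baseline; the fabrics runs that follow reuse the same spdk_nvme_perf flag set and only the -r transport ID changes. A sketch of the two invocations, both taken from the trace (flags annotated):

    # -q queue depth, -o IO size in bytes, -w workload, -M read percentage, -t seconds
    spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:65:00.0'
    spdk_nvme_perf -q 1  -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'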
00:25:06.875 20:38:59 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:06.875 EAL: No free 2048 kB hugepages reported on node 1
00:25:08.257 Initializing NVMe Controllers
00:25:08.257 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:08.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:08.257 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:08.257 Initialization complete. Launching workers.
00:25:08.257 ========================================================
00:25:08.257                      Latency(us)
00:25:08.257 Device Information                                                       :   IOPS      MiB/s    Average        min        max
00:25:08.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    110.82      0.43    9403.84     345.03   46239.50
00:25:08.257 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:     40.93      0.16   25597.64    7959.86   53893.64
00:25:08.257 ========================================================
00:25:08.257 Total                                                                    :    151.75      0.59   13771.90     345.03   53893.64
00:25:08.257
00:25:08.257 20:39:00 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:08.257 EAL: No free 2048 kB hugepages reported on node 1
00:25:09.641 Initializing NVMe Controllers
00:25:09.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:09.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:09.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:09.641 Initialization complete. Launching workers.
00:25:09.641 ========================================================
00:25:09.641                      Latency(us)
00:25:09.641 Device Information                                                       :   IOPS      MiB/s    Average        min        max
00:25:09.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  10392.00     40.59    3080.52     409.49    6577.00
00:25:09.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   3905.00     15.25    8239.82    6924.69   15778.98
00:25:09.641 ========================================================
00:25:09.641 Total                                                                    :  14297.00     55.85    4489.70     409.49   15778.98
00:25:09.641
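Note: a quick consistency check on these numbers via Little's law — with a closed queue of depth Q per namespace, IOPS x mean latency ≈ Q. Both runs above agree (a sketch; values copied from the tables, latency in microseconds):

    awk 'BEGIN { print 110.82   * 9403.84 / 1e6 }'   # ≈ 1.04, the -q 1 run
    awk 'BEGIN { print 10392.00 * 3080.52 / 1e6 }'   # ≈ 32.01, the -q 32 run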
00:25:09.641 20:39:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:25:09.641 20:39:01 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:25:09.641 20:39:01 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:25:09.641 EAL: No free 2048 kB hugepages reported on node 1
00:25:12.187 Initializing NVMe Controllers
00:25:12.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:12.187 Controller IO queue size 128, less than required.
00:25:12.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:12.187 Controller IO queue size 128, less than required.
00:25:12.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:12.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:12.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:12.187 Initialization complete. Launching workers.
00:25:12.187 ========================================================
00:25:12.187                      Latency(us)
00:25:12.187 Device Information                                                       :   IOPS      MiB/s    Average        min        max
00:25:12.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1744.73    436.18   74277.43   54878.22  120921.30
00:25:12.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:    634.67    158.67  214913.48   63179.29  314127.60
00:25:12.187 ========================================================
00:25:12.187 Total                                                                    :   2379.40    594.85  111790.22   54878.22  314127.60
00:25:12.187
00:25:12.187 20:39:04 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:25:12.187 EAL: No free 2048 kB hugepages reported on node 1
00:25:12.187 No valid NVMe controllers or AIO or URING devices found
00:25:12.187 Initializing NVMe Controllers
00:25:12.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:12.187 Controller IO queue size 128, less than required.
00:25:12.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:12.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:25:12.187 Controller IO queue size 128, less than required.
00:25:12.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:12.187 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:25:12.187 WARNING: Some requested NVMe devices were skipped
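Note: the -o 36964 run finds no devices because 36964 bytes is not a multiple of the 512-byte LBA size; both namespaces are dropped and nothing is left to test, hence the warnings above:

    echo $(( 36964 % 512 ))   # 100 -> not LBA-aligned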
00:25:12.187 20:39:04 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:25:12.187 EAL: No free 2048 kB hugepages reported on node 1
00:25:14.734 Initializing NVMe Controllers
00:25:14.734 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:14.734 Controller IO queue size 128, less than required.
00:25:14.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:14.734 Controller IO queue size 128, less than required.
00:25:14.734 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:25:14.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:25:14.734 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:25:14.734 Initialization complete. Launching workers.
00:25:14.734
00:25:14.734 ====================
00:25:14.734 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:25:14.734 TCP transport:
00:25:14.734   polls: 23528
00:25:14.734   idle_polls: 8636
00:25:14.734   sock_completions: 14892
00:25:14.734   nvme_completions: 4777
00:25:14.734   submitted_requests: 7154
00:25:14.734   queued_requests: 1
00:25:14.734
00:25:14.734 ====================
00:25:14.734 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:25:14.734 TCP transport:
00:25:14.734   polls: 23321
00:25:14.734   idle_polls: 8191
00:25:14.734   sock_completions: 15130
00:25:14.734   nvme_completions: 7677
00:25:14.734   submitted_requests: 11392
00:25:14.734   queued_requests: 1
00:25:14.734 ========================================================
00:25:14.734                      Latency(us)
00:25:14.734 Device Information                                                       :   IOPS      MiB/s    Average        min        max
00:25:14.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:   1193.87    298.47  109459.03   54537.65  188911.86
00:25:14.734 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   1918.79    479.70   67322.82   33452.18  103598.45
00:25:14.734 ========================================================
00:25:14.734 Total                                                                    :   3112.66    778.17   83484.28   33452.18  188911.86
00:25:14.734
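Note: in the --transport-stat output, idle_polls counts poller iterations that found no work, so idle_polls/polls roughly gauges how much headroom the reactor had during the run (a sketch, NSID 1 qpair):

    awk 'BEGIN { printf "%.2f\n", 8636 / 23528 }'   # ≈ 0.37 of polls were idle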
00:25:14.734 20:39:06 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:14.734 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:25:14.995 rmmod nvme_tcp
00:25:14.995 rmmod nvme_fabrics
00:25:14.995 rmmod nvme_keyring
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1444123 ']'
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1444123
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1444123 ']'
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1444123
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1444123
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1444123'
00:25:14.995 killing process with pid 1444123
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1444123
00:25:14.995 20:39:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1444123
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:25:16.911 20:39:09 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:19.457 20:39:11 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:19.457
00:25:19.457 real    0m24.592s
00:25:19.457 user    0m56.660s
00:25:19.457 sys     0m8.941s
00:25:19.457 20:39:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:25:19.457 20:39:11 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:25:19.457 ************************************
00:25:19.457 END TEST nvmf_perf
00:25:19.457 ************************************
00:25:19.457 20:39:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:25:19.457 20:39:11 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:19.457 20:39:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:25:19.457 20:39:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:25:19.457 20:39:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:19.457 ************************************
00:25:19.457 START TEST nvmf_fio_host
00:25:19.457 ************************************
00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp
00:25:19.457 * Looking for test storage...
00:25:19.457 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:19.457 20:39:11 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:25:27.598 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:27.599 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
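(For readers following the trace: the device classification above reduces to matching each PCI function's vendor/device pair against the e810/x722/mlx allow-lists built a few entries earlier. A rough standalone sketch of the same check follows; this is not the harness's actual code, the IDs are copied from the arrays in the trace, and 0x8086/0x159b is the pair reported for 0000:31:00.0.)

for dev in /sys/bus/pci/devices/*; do
    id="$(cat "$dev/vendor") - $(cat "$dev/device")"      # e.g. "0x8086 - 0x159b"
    case "$id" in
        "0x8086 - 0x1592" | "0x8086 - 0x159b")            # Intel E810 family
            echo "Found ${dev##*/} ($id)" ;;
        "0x8086 - 0x37d2")                                # Intel X722
            echo "Found ${dev##*/} ($id)" ;;
        "0x15b3 - "*)                                     # Mellanox; the script matches specific IDs, wildcarded here for brevity
            echo "Found ${dev##*/} ($id)" ;;
    esac
done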
00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:27.599 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:27.599 Found net devices under 0000:31:00.0: cvl_0_0 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:27.599 Found net devices under 0000:31:00.1: cvl_0_1 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
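(The "Found net devices under ..." lines come from a sysfs lookup that maps each matched PCI function to its kernel interface name, which this rig names cvl_0_0 and cvl_0_1. A minimal version of that lookup, using the same expansions visible in nvmf/common.sh above and the PCI address from the log:)

pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # one entry per netdev bound to this function
pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the path, leaving e.g. cvl_0_0
echo "Found net devices under $pci: ${pci_net_devs[*]}"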
00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:27.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:27.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.356 ms 00:25:27.599 00:25:27.599 --- 10.0.0.2 ping statistics --- 00:25:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.599 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:27.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:27.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:25:27.599 00:25:27.599 --- 10.0.0.1 ping statistics --- 00:25:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.599 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1452084 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1452084 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1452084 ']' 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.599 20:39:19 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.599 [2024-07-15 20:39:19.710114] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:25:27.599 [2024-07-15 20:39:19.710180] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.599 EAL: No free 2048 kB hugepages reported on node 1 00:25:27.599 [2024-07-15 20:39:19.791028] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:27.599 [2024-07-15 20:39:19.865156] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
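(For orientation: the nvmf_tcp_init sequence traced a few entries up wires the two E810 ports back-to-back for the test, moving the target-side port into its own network namespace so that initiator traffic genuinely leaves the root namespace. Condensed, with interface names and addresses exactly as in the log, the steps are:)

ip netns add cvl_0_0_ns_spdk                        # namespace that hosts the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator port stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open TCP/4420 on the initiator-side interface
ping -c 1 10.0.0.2                                  # root namespace -> namespaced target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # and the reverse path

(This is also why nvmf_tgt is launched under "ip netns exec cvl_0_0_ns_spdk" above: that is the NVMF_TARGET_NS_CMD prefix folded into NVMF_APP.)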
00:25:27.599 [2024-07-15 20:39:19.865195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.600 [2024-07-15 20:39:19.865202] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.600 [2024-07-15 20:39:19.865209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.600 [2024-07-15 20:39:19.865215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:27.600 [2024-07-15 20:39:19.865293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.600 [2024-07-15 20:39:19.865419] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.600 [2024-07-15 20:39:19.865575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.600 [2024-07-15 20:39:19.865576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:28.171 20:39:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:28.171 20:39:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:25:28.171 20:39:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:28.432 [2024-07-15 20:39:20.635154] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:28.432 20:39:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:28.432 20:39:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:28.432 20:39:20 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.432 20:39:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:28.693 Malloc1 00:25:28.693 20:39:20 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:28.693 20:39:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:28.954 20:39:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:29.214 [2024-07-15 20:39:21.372648] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:29.214 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:29.498 20:39:21 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:29.760 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:29.760 fio-3.35 00:25:29.760 Starting 1 thread 00:25:29.760 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.295 00:25:32.295 test: (groupid=0, jobs=1): err= 0: pid=1452617: Mon Jul 15 20:39:24 2024 00:25:32.295 read: IOPS=11.1k, BW=43.5MiB/s (45.6MB/s)(87.2MiB/2004msec) 00:25:32.295 slat (usec): min=2, max=288, avg= 2.21, stdev= 2.67 00:25:32.295 clat (usec): min=3676, max=9338, avg=6357.26, stdev=1182.22 00:25:32.295 lat (usec): min=3712, max=9344, avg=6359.47, stdev=1182.24 00:25:32.295 clat percentiles (usec): 00:25:32.295 | 1.00th=[ 4359], 5.00th=[ 4621], 10.00th=[ 4817], 20.00th=[ 5014], 00:25:32.295 | 30.00th=[ 5211], 40.00th=[ 6063], 50.00th=[ 6783], 60.00th=[ 7046], 00:25:32.295 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7701], 95.00th=[ 7898], 00:25:32.295 | 99.00th=[ 8291], 99.50th=[ 8455], 99.90th=[ 8979], 99.95th=[ 9110], 00:25:32.295 | 99.99th=[ 9241] 00:25:32.295 bw ( KiB/s): min=38480, 
max=56752, per=99.85%, avg=44504.00, stdev=8429.46, samples=4 00:25:32.295 iops : min= 9620, max=14188, avg=11126.00, stdev=2107.36, samples=4 00:25:32.295 write: IOPS=11.1k, BW=43.4MiB/s (45.5MB/s)(86.9MiB/2004msec); 0 zone resets 00:25:32.295 slat (usec): min=2, max=269, avg= 2.32, stdev= 1.99 00:25:32.295 clat (usec): min=2898, max=7938, avg=5117.63, stdev=943.28 00:25:32.295 lat (usec): min=2916, max=8178, avg=5119.95, stdev=943.34 00:25:32.295 clat percentiles (usec): 00:25:32.295 | 1.00th=[ 3523], 5.00th=[ 3752], 10.00th=[ 3851], 20.00th=[ 4047], 00:25:32.295 | 30.00th=[ 4228], 40.00th=[ 4817], 50.00th=[ 5473], 60.00th=[ 5669], 00:25:32.295 | 70.00th=[ 5800], 80.00th=[ 5997], 90.00th=[ 6194], 95.00th=[ 6390], 00:25:32.295 | 99.00th=[ 6718], 99.50th=[ 6849], 99.90th=[ 7308], 99.95th=[ 7504], 00:25:32.295 | 99.99th=[ 7832] 00:25:32.295 bw ( KiB/s): min=38896, max=56384, per=99.98%, avg=44386.00, stdev=8253.48, samples=4 00:25:32.295 iops : min= 9724, max=14096, avg=11096.50, stdev=2063.37, samples=4 00:25:32.295 lat (msec) : 4=8.70%, 10=91.30% 00:25:32.295 cpu : usr=70.49%, sys=27.31%, ctx=52, majf=0, minf=6 00:25:32.295 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:32.295 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:32.295 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:32.296 issued rwts: total=22330,22242,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:32.296 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:32.296 00:25:32.296 Run status group 0 (all jobs): 00:25:32.296 READ: bw=43.5MiB/s (45.6MB/s), 43.5MiB/s-43.5MiB/s (45.6MB/s-45.6MB/s), io=87.2MiB (91.5MB), run=2004-2004msec 00:25:32.296 WRITE: bw=43.4MiB/s (45.5MB/s), 43.4MiB/s-43.4MiB/s (45.5MB/s-45.5MB/s), io=86.9MiB (91.1MB), run=2004-2004msec 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk 
'{print $3}' 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:32.296 20:39:24 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:32.296 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:32.296 fio-3.35 00:25:32.296 Starting 1 thread 00:25:32.555 EAL: No free 2048 kB hugepages reported on node 1 00:25:35.098 00:25:35.098 test: (groupid=0, jobs=1): err= 0: pid=1453442: Mon Jul 15 20:39:27 2024 00:25:35.098 read: IOPS=9112, BW=142MiB/s (149MB/s)(285MiB/2005msec) 00:25:35.098 slat (usec): min=3, max=111, avg= 3.63, stdev= 1.62 00:25:35.098 clat (usec): min=1139, max=15108, avg=8583.54, stdev=1947.92 00:25:35.098 lat (usec): min=1142, max=15112, avg=8587.17, stdev=1948.02 00:25:35.098 clat percentiles (usec): 00:25:35.098 | 1.00th=[ 4359], 5.00th=[ 5473], 10.00th=[ 5997], 20.00th=[ 6849], 00:25:35.098 | 30.00th=[ 7439], 40.00th=[ 8094], 50.00th=[ 8717], 60.00th=[ 9241], 00:25:35.098 | 70.00th=[ 9896], 80.00th=[10290], 90.00th=[10814], 95.00th=[11469], 00:25:35.098 | 99.00th=[13304], 99.50th=[13829], 99.90th=[14746], 99.95th=[14746], 00:25:35.098 | 99.99th=[15008] 00:25:35.098 bw ( KiB/s): min=66944, max=80832, per=49.18%, avg=71696.00, stdev=6268.65, samples=4 00:25:35.098 iops : min= 4184, max= 5052, avg=4481.00, stdev=391.79, samples=4 00:25:35.098 write: IOPS=5329, BW=83.3MiB/s (87.3MB/s)(146MiB/1754msec); 0 zone resets 00:25:35.098 slat (usec): min=40, max=327, avg=41.10, stdev= 7.51 00:25:35.098 clat (usec): min=3294, max=16058, avg=9509.93, stdev=1486.46 00:25:35.098 lat (usec): min=3335, max=16099, avg=9551.03, stdev=1487.97 00:25:35.098 clat percentiles (usec): 00:25:35.098 | 1.00th=[ 6587], 5.00th=[ 7439], 10.00th=[ 7767], 20.00th=[ 8291], 00:25:35.098 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:25:35.098 | 70.00th=[10159], 80.00th=[10683], 90.00th=[11338], 95.00th=[12125], 00:25:35.098 | 99.00th=[13960], 99.50th=[14353], 99.90th=[15926], 99.95th=[15926], 00:25:35.098 | 99.99th=[16057] 00:25:35.098 bw ( KiB/s): min=69888, max=83616, per=87.23%, avg=74384.00, stdev=6266.04, samples=4 00:25:35.098 iops : min= 4368, max= 5226, avg=4649.00, stdev=391.63, samples=4 00:25:35.098 lat (msec) : 2=0.06%, 4=0.29%, 10=70.94%, 20=28.71% 00:25:35.098 cpu : usr=83.38%, sys=14.37%, ctx=15, majf=0, minf=21 00:25:35.098 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:35.098 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:35.098 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:35.098 issued rwts: total=18270,9348,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:35.098 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:35.098 00:25:35.098 Run status group 0 (all jobs): 00:25:35.098 READ: bw=142MiB/s (149MB/s), 142MiB/s-142MiB/s (149MB/s-149MB/s), io=285MiB (299MB), run=2005-2005msec 00:25:35.098 WRITE: bw=83.3MiB/s (87.3MB/s), 83.3MiB/s-83.3MiB/s (87.3MB/s-87.3MB/s), io=146MiB (153MB), run=1754-1754msec 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:35.098 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:35.099 rmmod nvme_tcp 00:25:35.099 rmmod nvme_fabrics 00:25:35.099 rmmod nvme_keyring 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1452084 ']' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1452084 ']' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1452084' 00:25:35.099 killing process with pid 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1452084 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:35.099 20:39:27 
nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:35.099 20:39:27 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.643 20:39:29 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:37.643 00:25:37.643 real 0m18.223s 00:25:37.643 user 1m3.109s 00:25:37.643 sys 0m8.166s 00:25:37.643 20:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:37.643 20:39:29 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.643 ************************************ 00:25:37.643 END TEST nvmf_fio_host 00:25:37.643 ************************************ 00:25:37.643 20:39:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:37.643 20:39:29 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:37.643 20:39:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:37.643 20:39:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:37.643 20:39:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:37.643 ************************************ 00:25:37.643 START TEST nvmf_failover 00:25:37.643 ************************************ 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:37.643 * Looking for test storage... 
00:25:37.643 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g 
is_hw=no 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:37.643 20:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:37.644 20:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:37.644 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:37.644 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:37.644 20:39:29 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:25:37.644 20:39:29 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:45.782 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:45.782 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:25:45.783 Found 0000:31:00.0 (0x8086 - 0x159b) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:25:45.783 Found 0000:31:00.1 (0x8086 - 0x159b) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:25:45.783 Found net devices under 0000:31:00.0: cvl_0_0 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:25:45.783 Found net devices under 0000:31:00.1: cvl_0_1 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:45.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:45.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.680 ms 00:25:45.783 00:25:45.783 --- 10.0.0.2 ping statistics --- 00:25:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.783 rtt min/avg/max/mdev = 0.680/0.680/0.680/0.000 ms 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:45.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:45.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:25:45.783 00:25:45.783 --- 10.0.0.1 ping statistics --- 00:25:45.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:45.783 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1458447 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1458447 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1458447 ']' 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:45.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:45.783 20:39:37 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:45.783 [2024-07-15 20:39:37.872042] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
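(The nvmf_tgt instance just launched for the failover suite is provisioned over JSON-RPC in the trace that follows. Condensed into plain calls, with values copied from the trace, the flow is roughly the sketch below; the fio test earlier in this log ran the same sequence with Malloc1 and a single 4420 listener.)

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192        # TCP transport; -o and -u 8192 copied verbatim from the trace
$rpc bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # -a allows any host
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
for port in 4420 4421 4422; do                      # three listeners = three candidate paths for failover
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s "$port"
done

(bdevperf, started with -q 128 -o 4096 -w verify -t 15, then attaches this subsystem as bdev NVMe0 through ports 4420 and 4421, and the listeners are removed one at a time while I/O runs; each removal produces the burst of recv-state notices seen further down as the affected qpairs are torn down.)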
00:25:45.783 [2024-07-15 20:39:37.872089] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:45.783 EAL: No free 2048 kB hugepages reported on node 1 00:25:45.783 [2024-07-15 20:39:37.962248] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:45.784 [2024-07-15 20:39:38.026798] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:45.784 [2024-07-15 20:39:38.026836] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:45.784 [2024-07-15 20:39:38.026843] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:45.784 [2024-07-15 20:39:38.026850] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:45.784 [2024-07-15 20:39:38.026855] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:45.784 [2024-07-15 20:39:38.026964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:45.784 [2024-07-15 20:39:38.027118] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:45.784 [2024-07-15 20:39:38.027119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:46.355 20:39:38 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:46.615 [2024-07-15 20:39:38.866950] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:46.615 20:39:38 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:46.874 Malloc0 00:25:46.874 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:46.874 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:47.134 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.394 [2024-07-15 20:39:39.547746] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.394 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:47.394 [2024-07-15 
00:25:47.395 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:47.655 [2024-07-15 20:39:39.876663] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1458822
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1458822 /var/tmp/bdevperf.sock
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1458822 ']'
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:25:47.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:25:47.655 20:39:39 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:25:48.694 20:39:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:25:48.694 20:39:40 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:25:48.694 20:39:40 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:48.954 NVMe0n1
00:25:48.954 20:39:41 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:49.214 
00:25:49.214 20:39:41 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1459158
00:25:49.214 20:39:41 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:25:49.214 20:39:41 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1
00:25:50.157 20:39:42 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
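On the initiator side, bdevperf starts idle (-z) and is configured over its own RPC socket before bdevperf.py launches the verify workload; failover.sh@30-43 boils down to the sketch below, under the same path assumptions as above:

  # Queue depth 128, 4 KiB I/Os, 15 s verify workload; -z waits for RPC configuration.
  ./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  bdevperf_pid=$!

  # Two trids to one subsystem: 4421 is the standby path bdev_nvme can fail over to.
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  ./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
      -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1

  # Start the workload in the background, give it a second, then drop the active portal.
  ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests &
  run_test_pid=$!
  sleep 1
  ./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420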
00:25:50.418 [2024-07-15 20:39:42.600310] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1083770 is same with the state(5) to be set
00:25:50.418 [... identical message repeated 14 more times for tqpair=0x1083770 (20:39:42.600373 - 20:39:42.600438); duplicates collapsed ...]
00:25:50.418 20:39:42 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3
00:25:53.718 20:39:45 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:25:53.718 
00:25:53.718 20:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:25:53.718 [2024-07-15 20:39:46.081956] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1084e70 is same with the state(5) to be set
00:25:53.718 [... identical message repeated 29 more times for tqpair=0x1084e70 (20:39:46.081993 - 20:39:46.082125); duplicates collapsed ...]
00:25:53.978 20:39:46 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3
00:25:57.279 20:39:49 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:25:57.279 [2024-07-15 20:39:49.259627] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:57.279 20:39:49 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:25:58.220 20:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:25:58.220 [2024-07-15 20:39:50.436967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085550 is same with the state(5) to be set
00:25:58.220 [... identical message repeated 19 more times for tqpair=0x1085550 (20:39:50.437005 - 20:39:50.437093); duplicates collapsed ...]
00:25:58.220 20:39:50 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1459158
00:26:04.807 0
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1458822 ']'
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458822'
00:26:04.807 killing process with pid 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1458822
00:26:04.807 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:26:04.807 [2024-07-15 20:39:39.945007] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:26:04.807 [2024-07-15 20:39:39.945070] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1458822 ]
00:26:04.807 EAL: No free 2048 kB hugepages reported on node 1
00:26:04.807 [2024-07-15 20:39:40.018880] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:04.807 [2024-07-15 20:39:40.085634] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:04.807 Running I/O for 15 seconds...
00:26:04.807 [2024-07-15 20:39:42.601093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:96552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.807 [2024-07-15 20:39:42.601128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.807 [2024-07-15 20:39:42.601144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:96560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.807 [2024-07-15 20:39:42.601153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.807 [2024-07-15 20:39:42.601162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:96568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.807 [2024-07-15 20:39:42.601169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.807 [2024-07-15 20:39:42.601178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:96576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:96584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:96592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:96600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:96608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:96616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:96624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601296] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:96632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:96640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:96648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:96656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:96664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:96672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:96680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:96696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:96704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601461] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:96712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:96720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:96728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:96736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:96744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:96752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:96760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:96768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:96776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:96784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:96792 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:96800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:96808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:95920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:95928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:95936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:95944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:95952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:95960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:95968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.808 [2024-07-15 20:39:42.601779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:95976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.808 [2024-07-15 20:39:42.601795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:96816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.808 [2024-07-15 20:39:42.601811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.808 [2024-07-15 20:39:42.601819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:95984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:95992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:96000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601959] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:96056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.601984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:96080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:96096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:96104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:96112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602124] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:96136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:96160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:96248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:96256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:96288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.809 [2024-07-15 20:39:42.602496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:96312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.809 [2024-07-15 20:39:42.602503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:96328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:96352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 
[2024-07-15 20:39:42.602626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:96376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:96384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:96392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:96400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:44 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.810 [2024-07-15 20:39:42.602975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.602984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:96824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.602991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:96832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:96840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:96848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:96856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:96864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:96872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:96880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:96888 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:96896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:96904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.810 [2024-07-15 20:39:42.603150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.810 [2024-07-15 20:39:42.603159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:42.603167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:42.603176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:96920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:42.603183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:42.603192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:96928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:42.603199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:42.603217] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.811 [2024-07-15 20:39:42.603224] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.811 [2024-07-15 20:39:42.603232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96936 len:8 PRP1 0x0 PRP2 0x0 00:26:04.811 [2024-07-15 20:39:42.603241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:42.603279] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf2f100 was disconnected and freed. reset controller. 
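Every I/O still queued on qpair 0xf2f100 above is completed with "ABORTED - SQ DELETION" (status code type 00h / status code 08h, the NVMe generic "Command Aborted due to SQ Deletion" status) before the qpair is freed and bdev_nvme schedules a controller reset. For reference, a target/initiator pair like the one this test drives can be stood up with SPDK's rpc.py; this is a minimal sketch, not the test's own script: only the subsystem NQN nqn.2016-06.io.spdk:cnode1 and the 10.0.0.2:4420/4421 listeners are taken from the log, and the bdev names are illustrative.
# target side: TCP transport, one subsystem, listeners on both ports the log fails over between
scripts/rpc.py nvmf_create_transport -t tcp
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
# initiator side: attach through the first path; bdev_nvme owns the qpairs logged above
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1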
00:26:04.811 [2024-07-15 20:39:42.603288] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:04.811 [2024-07-15 20:39:42.603308] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.811 [2024-07-15 20:39:42.603316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.811 [2024-07-15 20:39:42.603324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.811 [2024-07-15 20:39:42.603331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.811 [2024-07-15 20:39:42.603339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.811 [2024-07-15 20:39:42.603346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.811 [2024-07-15 20:39:42.603354] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.811 [2024-07-15 20:39:42.603361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.811 [2024-07-15 20:39:42.603368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.811 [2024-07-15 20:39:42.606896] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.811 [2024-07-15 20:39:42.606919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf08ea0 (9): Bad file descriptor
00:26:04.811 [2024-07-15 20:39:42.778603] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
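This block is the failover proper: bdev_nvme_failover_trid switches the controller's target from 10.0.0.2:4420 to 10.0.0.2:4421, the admin queue's four outstanding ASYNC EVENT REQUESTs are aborted with the same SQ-deletion status, flushing the dead TCP qpair fails with "Bad file descriptor" because the socket is already closed, and about 170 ms later the reset completes against the new path. With the sketch above in place, the same transition can be provoked by registering a second path and then dropping the active listener; re-invoking bdev_nvme_attach_controller with the same -b name to add an alternate trid is how SPDK's failover test does it, but whether additional multipath flags are required depends on the SPDK version, so treat this as a hedged sketch rather than the test's exact steps.
# initiator side: register 10.0.0.2:4421 as an alternate path for the same controller
scripts/rpc.py bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# target side: drop the listener the initiator is connected to; queued I/O is aborted
# (SQ DELETION) and bdev_nvme resets onto 4421, as in the log above
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420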
00:26:04.811 [2024-07-15 20:39:46.083885] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.811 [2024-07-15 20:39:46.083923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.083934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.811 [2024-07-15 20:39:46.083942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.083958] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.811 [2024-07-15 20:39:46.083965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.083974] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:04.811 [2024-07-15 20:39:46.083981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.083988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf08ea0 is same with the state(5) to be set 00:26:04.811 [2024-07-15 20:39:46.084051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:51792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.811 [2024-07-15 20:39:46.084062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:51928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:51936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:51944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:51952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:51960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084147] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:51968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:51976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:51984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:51992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:52000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:52008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:52016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:52024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:52032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:52040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:52048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:52056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:52064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:52072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:52080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:52088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:52096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:52104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:52112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.811 [2024-07-15 20:39:46.084478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:52120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.811 [2024-07-15 20:39:46.084484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:52128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:52136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:52144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:52152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:52160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:52168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:52176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:52184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:52192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:52200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084657] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:52208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:52216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:52224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:52232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:52240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:52248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:52256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:52264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:52272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:52280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084816] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:52288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:52296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:52304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:52312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:52320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:52328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:52336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:52344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:52352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:52360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:52368 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.812 [2024-07-15 20:39:46.084983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.812 [2024-07-15 20:39:46.084991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:52376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.084999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:52384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:52392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:52400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:52408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:52416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:52424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:52432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:52440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:52448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 
[2024-07-15 20:39:46.085144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:52456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:52464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:52472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:52480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:52488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:52496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:52504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:52512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:52520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:52528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085310] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:52536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:52544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:52552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:52560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:52568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:52576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:52584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:52592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:52608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:52616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:52624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:52632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:52640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:52648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:52656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:52664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:52672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:52680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:52688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:26:04.813 [2024-07-15 20:39:46.085638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:52696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.813 [2024-07-15 20:39:46.085653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:52704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.813 [2024-07-15 20:39:46.085662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:52712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.085678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:52720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.085694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:52728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.085710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:52736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.085725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:51800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:51808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:51816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:51824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085798] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:51832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:51840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:51848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:51856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:51864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:51872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:51880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:51888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:51896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:51904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:51912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:51920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.814 [2024-07-15 20:39:46.085983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.085992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:52744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.085999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:52752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:52760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:52768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:52776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:52784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:52792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:52800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:46.086115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:46.086132] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o
00:26:04.814 [2024-07-15 20:39:46.086139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:26:04.814 [2024-07-15 20:39:46.086145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:52808 len:8 PRP1 0x0 PRP2 0x0
00:26:04.814 [2024-07-15 20:39:46.086152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:46.086188] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf378f0 was disconnected and freed. reset controller.
00:26:04.814 [2024-07-15 20:39:46.086197] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:04.814 [2024-07-15 20:39:46.086205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.814 [2024-07-15 20:39:46.089700] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.814 [2024-07-15 20:39:46.089724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf08ea0 (9): Bad file descriptor
00:26:04.814 [2024-07-15 20:39:46.211934] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:04.814 [2024-07-15 20:39:50.440179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:04.814 [2024-07-15 20:39:50.440217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.814 [2024-07-15 20:39:50.440249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.814 [2024-07-15 20:39:50.440266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.814 [2024-07-15 20:39:50.440283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:79960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.814 [2024-07-15 20:39:50.440298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:79968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.814 [2024-07-15 20:39:50.440320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.814 [2024-07-15 20:39:50.440329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:23 nsid:1 lba:79976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:50.440336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:50.440345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:79984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:50.440352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:50.440362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.814 [2024-07-15 20:39:50.440369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.814 [2024-07-15 20:39:50.440378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:80000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:80008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:80016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:80024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:80032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:80040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:80048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:80056 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:80064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:80072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:80080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:80088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:80096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:80112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:80120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:80128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:80136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 
[2024-07-15 20:39:50.440658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:80144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:80152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:80160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:80168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:80176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:80184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:80192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:80200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:80208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:80216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440820] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:80224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:80240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:80248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:80264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:80272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:80280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:80288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:80296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.440989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:80304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.440996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.441005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:80312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.441011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.441020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:80320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.441027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.441036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.441043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.815 [2024-07-15 20:39:50.441052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:80336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.815 [2024-07-15 20:39:50.441059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:80344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:80352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:80360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:80368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:80456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441314] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:80464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:80472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:80480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:80488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:80496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:80504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:80512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:80520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:80528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:80536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441474] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:80544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:80552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:80560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:80568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:80576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:80584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:80592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.816 [2024-07-15 20:39:50.441577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.816 [2024-07-15 20:39:50.441585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:80600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.817 [2024-07-15 20:39:50.441593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:80608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:04.817 [2024-07-15 20:39:50.441609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441629] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80616 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 
20:39:50.441654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441659] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80624 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441685] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80632 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80640 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441730] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441735] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80648 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441756] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80656 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441782] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441787] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80664 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441808] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441813] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80672 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441833] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441838] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80680 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441863] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80688 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441889] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80696 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441909] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441914] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80704 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80712 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441960] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:26:04.817 [2024-07-15 20:39:50.441966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80720 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.441978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.441986] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.441991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.441997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80728 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80736 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80744 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442061] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442066] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80752 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80760 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442116] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80768 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442136] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442142] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80776 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442163] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442168] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80784 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.817 [2024-07-15 20:39:50.442187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.817 [2024-07-15 20:39:50.442193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.817 [2024-07-15 20:39:50.442199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80792 len:8 PRP1 0x0 PRP2 0x0 00:26:04.817 [2024-07-15 20:39:50.442206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442213] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442219] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80800 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442248] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80808 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442268] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80816 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442293] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80824 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442318] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442324] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80832 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442350] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80840 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80848 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442396] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442401] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80856 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442421] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442427] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 
[2024-07-15 20:39:50.442432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80864 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442446] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80872 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442472] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442477] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80880 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442496] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442501] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80888 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.442521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.442526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.442532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80896 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.442541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452764] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452790] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80904 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452822] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80912 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452842] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452848] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80920 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452869] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452874] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80928 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452899] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80936 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.818 [2024-07-15 20:39:50.452924] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.818 [2024-07-15 20:39:50.452930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80944 len:8 PRP1 0x0 PRP2 0x0 00:26:04.818 [2024-07-15 20:39:50.452936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.818 [2024-07-15 20:39:50.452977] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf455b0 was disconnected and freed. reset controller. 
00:26:04.818 [2024-07-15 20:39:50.452987] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:26:04.818 [2024-07-15 20:39:50.453015] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.818 [2024-07-15 20:39:50.453022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.818 [2024-07-15 20:39:50.453032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.818 [2024-07-15 20:39:50.453043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.818 [2024-07-15 20:39:50.453051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.818 [2024-07-15 20:39:50.453058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.818 [2024-07-15 20:39:50.453065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:04.818 [2024-07-15 20:39:50.453072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.818 [2024-07-15 20:39:50.453080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:04.818 [2024-07-15 20:39:50.453120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf08ea0 (9): Bad file descriptor
00:26:04.818 [2024-07-15 20:39:50.456626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:04.818 [2024-07-15 20:39:50.626337] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:04.818
00:26:04.818 Latency(us)
00:26:04.818 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:04.818 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:04.818 Verification LBA range: start 0x0 length 0x4000
00:26:04.818 NVMe0n1 : 15.00 11038.28 43.12 1113.67 0.00 10505.64 764.59 19333.12
00:26:04.818 ===================================================================================================================
00:26:04.818 Total : 11038.28 43.12 1113.67 0.00 10505.64 764.59 19333.12
00:26:04.819 Received shutdown signal, test time was about 15.000000 seconds
00:26:04.819
00:26:04.819 Latency(us)
00:26:04.819 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:04.819 ===================================================================================================================
00:26:04.819 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1462168
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1462168 /var/tmp/bdevperf.sock
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1462168 ']'
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
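The @65-@67 lines above are the pass criterion for the run that just finished: each trid failover ends in a "Resetting controller successful" message, and the harness fails unless exactly three are found. A minimal sketch of that check in bash, assuming the bdevperf output was captured to try.txt (the file this run cats later):

    #!/usr/bin/env bash
    # Sketch of the assertion traced at host/failover.sh@65-67 above.
    # Assumption: the bdevperf log was captured to try.txt, as later in this run.
    log=try.txt

    # One 'Resetting controller successful' per trid failover; the first
    # bdevperf run performs three failovers, so exactly three must appear.
    count=$(grep -c 'Resetting controller successful' "$log")
    if ((count != 3)); then
        echo "expected 3 successful controller resets, saw $count" >&2
        exit 1
    fi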
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:04.819 20:39:56 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:26:05.390 20:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:26:05.390 20:39:57 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0
00:26:05.390 20:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:26:05.390 [2024-07-15 20:39:57.757962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:26:05.651 20:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:26:05.651 [2024-07-15 20:39:57.930354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:26:05.651 20:39:57 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:05.911 NVMe0n1
00:26:05.911 20:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.172
00:26:06.172 20:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.432
00:26:06.432 20:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:26:06.432 20:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:06.692 20:39:58 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:06.692 20:39:59 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:26:09.992 20:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:09.992 20:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:26:09.992 20:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1463182
00:26:09.992 20:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1463182
00:26:09.992 20:40:02 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:26:10.935 0
00:26:10.935 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
[2024-07-15 20:39:56.839513] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:26:10.935 [2024-07-15 20:39:56.839570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1462168 ]
00:26:10.935 EAL: No free 2048 kB hugepages reported on node 1
00:26:10.935 [2024-07-15 20:39:56.905782] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:10.935 [2024-07-15 20:39:56.968723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:26:10.935 [2024-07-15 20:39:58.989472] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:10.935 [2024-07-15 20:39:58.989516] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:26:10.935 [2024-07-15 20:39:58.989528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:10.935 [2024-07-15 20:39:58.989537] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:26:10.935 [2024-07-15 20:39:58.989545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:10.935 [2024-07-15 20:39:58.989553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:26:10.935 [2024-07-15 20:39:58.989560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:10.935 [2024-07-15 20:39:58.989568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:26:10.935 [2024-07-15 20:39:58.989575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:10.935 [2024-07-15 20:39:58.989582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:26:10.935 [2024-07-15 20:39:58.989609] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:26:10.935 [2024-07-15 20:39:58.989623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b7ea0 (9): Bad file descriptor
00:26:10.935 [2024-07-15 20:39:59.001028] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:26:10.935 Running I/O for 1 seconds...
00:26:10.935
00:26:10.935 Latency(us)
00:26:10.935 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:10.935 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:10.935 Verification LBA range: start 0x0 length 0x4000
00:26:10.935 NVMe0n1 : 1.00 11181.04 43.68 0.00 0.00 11396.10 1870.51 10103.47
00:26:10.935 ===================================================================================================================
00:26:10.935 Total : 11181.04 43.68 0.00 0.00 11396.10 1870.51 10103.47
00:26:10.935 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:10.935 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:11.195 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:11.455 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:11.455 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:11.455 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:11.716 20:40:03 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:15.013 20:40:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:15.013 20:40:06 nvmf_tcp.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@108 -- # killprocess 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1462168 ']'
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1462168'
00:26:15.013 killing process with pid 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1462168
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@110 -- # sync
00:26:15.013 20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT
00:26:15.277
20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:15.277 rmmod nvme_tcp 00:26:15.277 rmmod nvme_fabrics 00:26:15.277 rmmod nvme_keyring 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1458447 ']' 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1458447 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1458447 ']' 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1458447 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1458447 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1458447' 00:26:15.277 killing process with pid 1458447 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1458447 00:26:15.277 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1458447 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:15.538 20:40:07 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.493 20:40:09 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:17.493 00:26:17.493 real 0m40.189s 00:26:17.493 user 2m1.774s 00:26:17.493 sys 0m8.589s 00:26:17.493 20:40:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:17.493 20:40:09 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
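Condensed, the failover exercise above is a short RPC sequence against the target and the bdevperf application. A minimal sketch, not the verbatim harness: the long workspace paths are shortened to rpc.py and bdevperf.py, and it assumes a target that already exposes nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 plus a bdevperf process listening on /var/tmp/bdevperf.sock:

# Add two more listeners so the host has paths to fail over to.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
# Attach the same subsystem through all three ports under one controller name.
for port in 4420 4421 4422; do
  rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
    -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n nqn.2016-06.io.spdk:cnode1
done
# Drop the first path, then drive I/O; it should fail over to 10.0.0.2:4421.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
# The controller must survive the failover.
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0

The "Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" notice and the clean bdevperf run in the try.txt dump above are exactly this sequence taking effect.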
00:26:17.493 ************************************
00:26:17.493 END TEST nvmf_failover
00:26:17.493 ************************************
00:26:17.493 20:40:09 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:17.493 20:40:09 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:17.493 20:40:09 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:17.493 20:40:09 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:17.493 20:40:09 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:17.754 ************************************
00:26:17.754 START TEST nvmf_host_discovery
00:26:17.754 ************************************
00:26:17.754 20:40:09 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp
00:26:17.754 * Looking for test storage...
00:26:17.754 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:17.754 20:40:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:17.755 20:40:10 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:26:17.755 20:40:10 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:25.894 20:40:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:25.894 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:25.894 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:25.894 20:40:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:25.894 Found net devices under 0000:31:00.0: cvl_0_0 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:25.894 Found net devices under 0000:31:00.1: cvl_0_1 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:25.894 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:25.895 20:40:17 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:25.895 20:40:17 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:25.895 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:25.895 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.550 ms
00:26:25.895
00:26:25.895 --- 10.0.0.2 ping statistics ---
00:26:25.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.895 rtt min/avg/max/mdev = 0.550/0.550/0.550/0.000 ms
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:25.895 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:25.895 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms
00:26:25.895
00:26:25.895 --- 10.0.0.1 ping statistics ---
00:26:25.895 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:25.895 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1468868
00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten
1468868 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1468868 ']' 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:25.895 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:25.895 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:25.895 [2024-07-15 20:40:18.200584] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:26:25.895 [2024-07-15 20:40:18.200646] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:25.895 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.156 [2024-07-15 20:40:18.299354] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.156 [2024-07-15 20:40:18.391403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.156 [2024-07-15 20:40:18.391466] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.156 [2024-07-15 20:40:18.391474] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.156 [2024-07-15 20:40:18.391481] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.156 [2024-07-15 20:40:18.391487] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.156 [2024-07-15 20:40:18.391510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:26.729 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:26.729 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:26.729 20:40:18 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:26.729 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:26.729 20:40:18 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 [2024-07-15 20:40:19.030818] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 [2024-07-15 20:40:19.039001] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 null0 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 null1 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1469051 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1469051 /tmp/host.sock 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1469051 ']' 00:26:26.729 20:40:19 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:26.729 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:26.729 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:26.990 [2024-07-15 20:40:19.121189] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:26:26.990 [2024-07-15 20:40:19.121260] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1469051 ] 00:26:26.990 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.990 [2024-07-15 20:40:19.192286] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.990 [2024-07-15 20:40:19.266208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:27.562 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.822 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.823 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:19 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.823 20:40:19 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:27.823 20:40:20 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:27.823 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:28.083 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 [2024-07-15 20:40:20.258091] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:28.084 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:28.345 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:26:28.345 20:40:20 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:28.606 [2024-07-15 20:40:20.948440] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:28.606 [2024-07-15 20:40:20.948461] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:28.606 [2024-07-15 20:40:20.948474] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:28.867 [2024-07-15 20:40:21.036757] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:28.867 [2024-07-15 20:40:21.222640] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:26:28.867 [2024-07-15 20:40:21.222663] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:29.128 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.388 20:40:21 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:29.388 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:29.389 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.649 [2024-07-15 20:40:21.814302] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:29.649 [2024-07-15 20:40:21.815375] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:29.649 [2024-07-15 20:40:21.815400] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:29.649 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:29.650 [2024-07-15 20:40:21.903662] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # sort -n 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:29.650 [2024-07-15 20:40:21.964280] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:29.650 [2024-07-15 20:40:21.964297] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:29.650 [2024-07-15 20:40:21.964302] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:29.650 20:40:21 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:31.033 20:40:22 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.033 [2024-07-15 20:40:23.097792] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:31.033 [2024-07-15 20:40:23.097815] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:31.033 [2024-07-15 20:40:23.104454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.033 [2024-07-15 20:40:23.104471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 20:40:23.104482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.033 [2024-07-15 20:40:23.104489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 20:40:23.104502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.033 [2024-07-15 20:40:23.104509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 20:40:23.104517] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:31.033 [2024-07-15 20:40:23.104524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:31.033 [2024-07-15 20:40:23.104531] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.033 [2024-07-15 20:40:23.114468] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.033 [2024-07-15 20:40:23.124506] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.033 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.033 [2024-07-15 20:40:23.124878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.033 [2024-07-15 20:40:23.124892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.033 [2024-07-15 20:40:23.124900] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.033 [2024-07-15 20:40:23.124912] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.033 [2024-07-15 20:40:23.124928] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.033 [2024-07-15 20:40:23.124935] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.033 [2024-07-15 20:40:23.124943] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.033 [2024-07-15 20:40:23.124954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
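
Note: errno 111 in the posix_sock_create errors above is ECONNREFUSED, and it is the expected failure mode at this point: the test just removed the 4420 listener (host/discovery.sh@127), so every reconnect attempt to 10.0.0.2:4420 is refused — roughly every 10 ms, going by the timestamps — until the discovery poller drops the stale path a few lines further down ("...4420 not found"). To confirm the errno mapping on a typical Linux test box (the header path may vary by distro):

    grep -w 111 /usr/include/asm-generic/errno.h
    # => #define ECONNREFUSED    111     /* Connection refused */
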
00:26:31.033 [2024-07-15 20:40:23.134561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.033 [2024-07-15 20:40:23.134893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.033 [2024-07-15 20:40:23.134905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.033 [2024-07-15 20:40:23.134912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.033 [2024-07-15 20:40:23.134923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.033 [2024-07-15 20:40:23.134933] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.033 [2024-07-15 20:40:23.134939] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.033 [2024-07-15 20:40:23.134946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.033 [2024-07-15 20:40:23.134956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.033 [2024-07-15 20:40:23.144613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.033 [2024-07-15 20:40:23.144952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.033 [2024-07-15 20:40:23.144963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.033 [2024-07-15 20:40:23.144971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.033 [2024-07-15 20:40:23.144982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.033 [2024-07-15 20:40:23.144992] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.033 [2024-07-15 20:40:23.144998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.033 [2024-07-15 20:40:23.145005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.033 [2024-07-15 20:40:23.145021] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
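
The waitforcondition wrapper that gates each of these checks (and the earlier get_bdev_list / get_subsystem_names waits) can be pieced together from the common/autotest_common.sh@912-@918 fragments in the xtrace: stash the condition string, then eval it up to ten times, one second apart. A minimal sketch consistent with the trace — the real helper may differ in detail, e.g. the failure return path, which this trace never exercises:

    waitforcondition() {
        local cond=$1      # condition string, eval'ed verbatim (@912/@915)
        local max=10       # poll at most 10 times (@913)
        while (( max-- )); do          # @914
            eval "$cond" && return 0   # @915/@916: succeed as soon as it holds
            sleep 1                    # @918: back off before the next poll
        done
        return 1           # assumed: give up once the attempts are exhausted
    }
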
00:26:31.033 [2024-07-15 20:40:23.154668] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.033 [2024-07-15 20:40:23.154996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.033 [2024-07-15 20:40:23.155008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.033 [2024-07-15 20:40:23.155016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.033 [2024-07-15 20:40:23.155027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.033 [2024-07-15 20:40:23.155043] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.033 [2024-07-15 20:40:23.155050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.033 [2024-07-15 20:40:23.155057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.033 [2024-07-15 20:40:23.155068] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.034 [2024-07-15 20:40:23.164722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.034 [2024-07-15 20:40:23.165059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.165071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.165083] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.165094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.165110] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.165117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.165124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.165135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.034 [2024-07-15 20:40:23.174775] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 [2024-07-15 20:40:23.175114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.175125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.175132] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.175143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.175153] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.175159] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.175166] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.175176] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.034 [2024-07-15 20:40:23.184830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 [2024-07-15 20:40:23.185216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.185227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.185240] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.185251] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.185277] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.185284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.185291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.185301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
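
The two list helpers being polled throughout (host/discovery.sh@55 and @59) are short rpc-over-unix-socket pipelines, reassembled here from the fragments in the trace; sort makes the output order-stable and xargs flattens it onto one line, so the result can be string-compared against expectations like "nvme0n1 nvme0n2":

    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }
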
00:26:31.034 [2024-07-15 20:40:23.194883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 [2024-07-15 20:40:23.195216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.195227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.195239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.195250] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.195263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.195269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.195276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.195286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.034 [2024-07-15 20:40:23.204934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 [2024-07-15 20:40:23.205269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.205281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.205288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.205298] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.205308] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.205314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.205320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.205331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
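
For context, this reset storm was provoked deliberately: the test moved the subsystem's listener from port 4420 to 4421 with the two target-side RPCs that appear earlier in the trace (host/discovery.sh@118 and @127), and is now waiting for the host to converge on the surviving path:

    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
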
00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:31.034 [2024-07-15 20:40:23.214984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:31.034 [2024-07-15 20:40:23.215462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.215499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.215510] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.215529] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.215554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.215562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.215570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.215584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
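
The notification bookkeeping behind each is_notification_count_eq check (host/discovery.sh@74-@75) fetches the bdev notifications past the current cursor and counts them client-side with jq. Judging by the notify_id progression in this trace (1, 2, 2, 2, 4), the cursor advances by the number of events consumed; a sketch under that assumption:

    get_notification_count() {
        # count the events newer than the current cursor
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        # assumed cursor arithmetic, inferred from the notify_id values above
        notify_id=$((notify_id + notification_count))
    }
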
00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:31.034 [2024-07-15 20:40:23.225037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:31.034 [2024-07-15 20:40:23.225531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.034 [2024-07-15 20:40:23.225568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15099a0 with addr=10.0.0.2, port=4420 00:26:31.034 [2024-07-15 20:40:23.225580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15099a0 is same with the state(5) to be set 00:26:31.034 [2024-07-15 20:40:23.225601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15099a0 (9): Bad file descriptor 00:26:31.034 [2024-07-15 20:40:23.225614] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:26:31.034 [2024-07-15 20:40:23.225622] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:26:31.034 [2024-07-15 20:40:23.225631] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:26:31.034 [2024-07-15 20:40:23.225648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
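
And the path check this wait keeps evaluating (host/discovery.sh@63) simply lists the trsvcid of every path the controller holds, numerically sorted, so the expected value shrinks from "4420 4421" to "4421" once the dead listener is pruned. From the fragments above:

    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
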
00:26:31.034 [2024-07-15 20:40:23.225684] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:31.034 [2024-07-15 20:40:23.225701] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:26:31.034 20:40:23 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:31.973 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.974 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:26:32.234 20:40:24 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.234 20:40:24 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.214 [2024-07-15 20:40:25.561159] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:33.214 [2024-07-15 20:40:25.561177] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:33.214 [2024-07-15 20:40:25.561189] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.522 [2024-07-15 20:40:25.649486] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:33.782 [2024-07-15 20:40:25.921948] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.782 [2024-07-15 20:40:25.921979] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:33.782 request: 00:26:33.782 { 00:26:33.782 "name": "nvme", 00:26:33.782 "trtype": "tcp", 00:26:33.782 "traddr": "10.0.0.2", 00:26:33.782 "adrfam": "ipv4", 00:26:33.782 "trsvcid": "8009", 00:26:33.782 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:33.782 "wait_for_attach": true, 00:26:33.782 "method": "bdev_nvme_start_discovery", 00:26:33.782 "req_id": 1 00:26:33.782 } 00:26:33.782 Got JSON-RPC error response 00:26:33.782 response: 00:26:33.782 { 00:26:33.782 "code": -17, 00:26:33.782 "message": "File exists" 00:26:33.782 } 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.782 20:40:25 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- 
# local arg=rpc_cmd 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:33.782 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.783 request: 00:26:33.783 { 00:26:33.783 "name": "nvme_second", 00:26:33.783 "trtype": "tcp", 00:26:33.783 "traddr": "10.0.0.2", 00:26:33.783 "adrfam": "ipv4", 00:26:33.783 "trsvcid": "8009", 00:26:33.783 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:33.783 "wait_for_attach": true, 00:26:33.783 "method": "bdev_nvme_start_discovery", 00:26:33.783 "req_id": 1 00:26:33.783 } 00:26:33.783 Got JSON-RPC error response 00:26:33.783 response: 00:26:33.783 { 00:26:33.783 "code": -17, 00:26:33.783 "message": "File exists" 00:26:33.783 } 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.783 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:34.043 20:40:26 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:34.043 20:40:26 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.985 [2024-07-15 20:40:27.189497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.985 [2024-07-15 20:40:27.189530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1505590 with addr=10.0.0.2, port=8010 00:26:34.985 [2024-07-15 20:40:27.189542] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:34.985 [2024-07-15 20:40:27.189549] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:34.985 [2024-07-15 20:40:27.189556] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:35.926 [2024-07-15 20:40:28.191878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:35.926 [2024-07-15 20:40:28.191900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1505590 with addr=10.0.0.2, port=8010 00:26:35.926 [2024-07-15 20:40:28.191911] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:35.926 [2024-07-15 20:40:28.191917] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:35.926 [2024-07-15 20:40:28.191924] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:36.866 [2024-07-15 20:40:29.193826] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:36.866 request: 00:26:36.866 { 00:26:36.866 "name": "nvme_second", 00:26:36.866 "trtype": "tcp", 00:26:36.866 "traddr": "10.0.0.2", 00:26:36.866 "adrfam": "ipv4", 00:26:36.866 "trsvcid": "8010", 00:26:36.866 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.866 "wait_for_attach": false, 00:26:36.866 "attach_timeout_ms": 3000, 00:26:36.866 "method": "bdev_nvme_start_discovery", 00:26:36.866 "req_id": 1 00:26:36.866 } 00:26:36.866 Got JSON-RPC error response 00:26:36.866 response: 00:26:36.866 { 00:26:36.866 "code": -110, 
00:26:36.866 "message": "Connection timed out" 00:26:36.866 } 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.866 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1469051 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:37.126 rmmod nvme_tcp 00:26:37.126 rmmod nvme_fabrics 00:26:37.126 rmmod nvme_keyring 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1468868 ']' 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1468868 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1468868 ']' 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1468868 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1468868 00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 
00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1468868'
00:26:37.126 killing process with pid 1468868
00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1468868
00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1468868
00:26:37.126 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:26:37.127 20:40:29 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:26:39.668
00:26:39.668 real 0m21.667s
00:26:39.668 user 0m25.658s
00:26:39.668 sys 0m7.363s
00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable
00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:39.668 ************************************
00:26:39.668 END TEST nvmf_host_discovery
00:26:39.668 ************************************
00:26:39.668 20:40:31 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:26:39.668 20:40:31 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:39.668 20:40:31 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:26:39.668 20:40:31 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:26:39.668 20:40:31 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:26:39.668 ************************************
00:26:39.668 START TEST nvmf_host_multipath_status
00:26:39.668 ************************************
00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp
00:26:39.668 * Looking for test storage...
00:26:39.668 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:39.668 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:39.669 20:40:31 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:26:39.669 20:40:31 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:47.806 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:26:47.807 Found 0000:31:00.0 (0x8086 - 0x159b) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:26:47.807 Found 0000:31:00.1 (0x8086 - 0x159b) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:26:47.807 Found net devices under 0000:31:00.0: cvl_0_0 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:26:47.807 Found net devices under 0000:31:00.1: cvl_0_1 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:47.807 20:40:39 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:26:47.807 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:26:47.807 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.564 ms
00:26:47.807
00:26:47.807 --- 10.0.0.2 ping statistics ---
00:26:47.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:47.807 rtt min/avg/max/mdev = 0.564/0.564/0.564/0.000 ms
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:26:47.807 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:26:47.807 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.347 ms
00:26:47.807
00:26:47.807 --- 10.0.0.1 ping statistics ---
00:26:47.807 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:26:47.807 rtt min/avg/max/mdev = 0.347/0.347/0.347/0.000 ms
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1475785
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1475785
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1475785 ']'
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:26:47.807 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:47.807 20:40:39 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x
00:26:47.807 [2024-07-15 20:40:40.046202] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:26:47.807 [2024-07-15 20:40:40.046278] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:47.807 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.807 [2024-07-15 20:40:40.127154] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:48.068 [2024-07-15 20:40:40.200715] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:48.068 [2024-07-15 20:40:40.200754] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:48.068 [2024-07-15 20:40:40.200762] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:48.068 [2024-07-15 20:40:40.200768] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:48.068 [2024-07-15 20:40:40.200774] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:48.068 [2024-07-15 20:40:40.200910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:48.068 [2024-07-15 20:40:40.200912] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1475785 00:26:48.638 20:40:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:48.638 [2024-07-15 20:40:41.001038] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:48.638 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:48.897 Malloc0 00:26:48.897 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:49.157 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:49.157 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:49.417 [2024-07-15 20:40:41.633016] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:49.417 20:40:41 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:49.417 [2024-07-15 20:40:41.785345] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1476157 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1476157 /var/tmp/bdevperf.sock 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1476157 ']' 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:49.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:49.677 20:40:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:50.248 20:40:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:50.248 20:40:42 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:26:50.248 20:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:50.509 20:40:42 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:26:51.077 Nvme0n1 00:26:51.077 20:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:51.335 Nvme0n1 00:26:51.335 20:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:51.335 20:40:43 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:53.245 20:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:53.245 20:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:53.506 20:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:53.506 20:40:45 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:54.928 20:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:54.928 20:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:54.928 20:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.928 20:40:46 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:54.928 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.189 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.450 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:55.710 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.710 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:55.710 20:40:47 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:55.970 20:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:55.970 20:40:48 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:56.912 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:56.912 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:56.912 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.912 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:57.173 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:57.173 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:57.173 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.173 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.435 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:57.696 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.696 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:57.696 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.696 20:40:49 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:57.957 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:58.218 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:58.478 20:40:50 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:59.420 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:59.420 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:59.420 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.420 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:59.681 20:40:51 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.941 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:00.202 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.202 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:00.202 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:00.202 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:00.462 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:00.462 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:00.462 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:00.462 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:00.723 20:40:52 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:01.664 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:01.664 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:01.664 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.664 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.924 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.924 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:01.924 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.924 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.185 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # [[ true == \t\r\u\e ]] 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.446 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.706 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.706 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:02.706 20:40:54 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:02.967 20:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:02.967 20:40:55 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:04.351 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.613 20:40:56 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:04.873 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.873 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:04.873 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.873 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:05.134 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.134 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:05.134 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:05.134 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.394 20:40:57 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:06.336 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:06.336 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:06.336 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.336 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:06.596 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.596 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:06.596 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
00:27:06.596 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.596 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:06.857 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.857 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:06.857 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.857 20:40:58 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:06.857 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:06.857 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:06.857 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:06.857 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:07.118 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:07.379 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:07.379 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:27:07.639 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
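At sh@116 the test switches the initiator-side controller from SPDK's default active_passive multipath policy to active_active, after which I/O may be spread across every optimized path, so both portals can report current=true at once. The RPC exactly as issued in the trace:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active

Compare the checks before this point, where at most one path was ever current, with the check_status true true true true true true that follows once both listeners are optimized.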
00:27:07.639 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:27:07.639 20:40:59 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:07.900 20:41:00 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:27:08.840 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:27:08.840 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:08.840 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:08.840 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.101 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:09.362 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.362 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:09.362 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.362 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
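check_status bundles the six assertions that repeat throughout this trace: current, connected, and accessible for each portal, in that argument order (match sh@121 check_status true true true true true true against the six port_status calls at sh@68-@73 that follow it). A sketch built on the port_status helper sketched earlier:

    # Sketch of check_status (multipath_status.sh@68-@73), argument order per the trace:
    check_status() {
        port_status 4420 current    "$1"
        port_status 4421 current    "$2"
        port_status 4420 connected  "$3"
        port_status 4421 connected  "$4"
        port_status 4420 accessible "$5"
        port_status 4421 accessible "$6"
    }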
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:09.622 20:41:01 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:09.883 20:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:09.883 20:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:27:09.883 20:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:10.143 20:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:27:10.144 20:41:02 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:27:11.128 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:27:11.128 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:27:11.128 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.128 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:11.387 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:11.387 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:27:11.387 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:11.387 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.648 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.648 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:11.648 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.648 20:41:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:11.648 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.648 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:11.648 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.648 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:11.908 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:11.908 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:11.908 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:11.908 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:27:12.169 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:12.429 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:27:12.689 20:41:04 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
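When one of these expectations fails it is quicker to dump all three flags for every path in a single pass than to replay six RPC round-trips; a hedged one-liner over the same bdev_nvme_get_io_paths output (field names exactly as the jq filters above use them):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[]
            | "\(.transport.trsvcid): current=\(.current) connected=\(.connected) accessible=\(.accessible)"'

Against the non_optimized/non_optimized state just set at sh@129, this would print current=true, connected=true, accessible=true for both 4420 and 4421, consistent with the check_status true true true true true true in progress above.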
00:27:13.631 20:41:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:13.631 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:13.892 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:13.892 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:13.892 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:13.892 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:14.152 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.152 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.153 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:14.412 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.412 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:27:14.412 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:14.412 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:14.672 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:14.672 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:27:14.672 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:27:14.672 20:41:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:27:14.933 20:41:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1
00:27:15.873 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false
00:27:15.873 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:27:15.873 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:15.873 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.140 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:27:16.399 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.399 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:27:16.399 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.399 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:27:16.659 20:41:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1476157
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1476157 ']'
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1476157
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1476157
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1476157'
killing process with pid 1476157
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1476157
00:27:16.919 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1476157
00:27:16.919 Connection closed with partial response:
00:27:16.919
00:27:16.919
00:27:17.182 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1476157
00:27:17.182 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:27:17.182 [2024-07-15 20:40:41.856388] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:27:17.182 [2024-07-15 20:40:41.856457] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1476157 ]
00:27:17.182 EAL: No free 2048 kB hugepages reported on node 1
00:27:17.182 [2024-07-15 20:40:41.913842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:27:17.182 [2024-07-15 20:40:41.965862] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:27:17.182 Running I/O for 90 seconds...
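The teardown traced above (common/autotest_common.sh@948-@972) guards the kill: it refuses an empty pid, confirms the process is still alive with kill -0, and on Linux resolves the command name with ps so a sudo wrapper is never the target (here it resolves to reactor_2, bdevperf's reactor thread). A sketch of that helper as it appears in the trace:

    # Sketch of killprocess per the trace (common/autotest_common.sh@948-@972):
    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1                     # @948: reject an empty pid
        kill -0 "$pid" || return 1                    # @952: is it still running?
        if [ "$(uname)" = Linux ]; then               # @953
            [ "$(ps --no-headers -o comm= "$pid")" != sudo ] || return 1   # @954/@958
        fi
        echo "killing process with pid $pid"          # @966
        kill "$pid"                                   # @967
        wait "$pid" || true                           # @972: reap the child
    }

The "Connection closed with partial response:" lines that follow are bdevperf being torn down mid-I/O; the script then cats try.txt, which is where the qpair trace below comes from.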
00:27:17.182 [2024-07-15 20:40:55.098711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:54640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:17.182 [2024-07-15 20:40:55.098743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:17.182 [2024-07-15 20:40:55.098870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:53744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.182 [2024-07-15 20:40:55.098875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
[... further nvme_qpair print_command / print_completion NOTICE pairs elided (WRITE lba 54648-54760, READ lba 53752-54544); every completion in this dump is ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
00:27:17.185 [2024-07-15 20:40:55.102566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:17.185 [2024-07-15 20:40:55.102571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0005
p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:54568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:54576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:54584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:54600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:54608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:54616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:40:55.102787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:54632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:40:55.102791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:93872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:93904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:93936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:93968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:94224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:94632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.185 [2024-07-15 20:41:07.104700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:94648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.185 [2024-07-15 20:41:07.104717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:94024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:94056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104769] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.104781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.104788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:94248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.105153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:94280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.105170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:94312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.105186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:94344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.185 [2024-07-15 20:41:07.105202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:94664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.185 [2024-07-15 20:41:07.105218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:94680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.185 [2024-07-15 20:41:07.105240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:94696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.185 [2024-07-15 20:41:07.105258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:17.185 [2024-07-15 20:41:07.105268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:94712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.186 [2024-07-15 20:41:07.105273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:17.186 [2024-07-15 20:41:07.105283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:94360 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:17.186 [2024-07-15 20:41:07.105288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:17.186 [2024-07-15 20:41:07.105298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:94392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:17.186 [2024-07-15 20:41:07.105303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:17.186 [2024-07-15 20:41:07.105314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:94728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:17.186 [2024-07-15 20:41:07.105320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:17.186 Received shutdown signal, test time was about 25.559947 seconds 00:27:17.186 00:27:17.186 Latency(us) 00:27:17.186 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:17.186 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:17.186 Verification LBA range: start 0x0 length 0x4000 00:27:17.186 Nvme0n1 : 25.56 10888.98 42.54 0.00 0.00 11736.75 387.41 3019898.88 00:27:17.186 =================================================================================================================== 00:27:17.186 Total : 10888.98 42.54 0.00 0.00 11736.75 387.41 3019898.88 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:17.186 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:17.186 rmmod nvme_tcp 00:27:17.186 rmmod nvme_fabrics 00:27:17.186 rmmod nvme_keyring 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 1475785 ']' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1475785 ']' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- 
common/autotest_common.sh@952 -- # kill -0 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1475785' 00:27:17.445 killing process with pid 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1475785 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:17.445 20:41:09 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.990 20:41:11 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:19.990 00:27:19.990 real 0m40.194s 00:27:19.990 user 1m41.273s 00:27:19.990 sys 0m11.380s 00:27:19.991 20:41:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:19.991 20:41:11 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:19.991 ************************************ 00:27:19.991 END TEST nvmf_host_multipath_status 00:27:19.991 ************************************ 00:27:19.991 20:41:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:19.991 20:41:11 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.991 20:41:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:19.991 20:41:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:19.991 20:41:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:19.991 ************************************ 00:27:19.991 START TEST nvmf_discovery_remove_ifc 00:27:19.991 ************************************ 00:27:19.991 20:41:11 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:19.991 * Looking for test storage... 
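The nvmftestfini teardown traced above (nvmf/common.sh@120-125) unloads the initiator stack with modprobe -v -r inside a retry loop, because nvme-tcp can stay referenced for a short while after the app exits; the rmmod lines in the log are modprobe's verbose output. A minimal sketch of that pattern, with an illustrative back-off sleep that the traced script does not have:

# Sketch of the unload-with-retry pattern from nvmf/common.sh above.
set +e
for i in {1..20}; do
  modprobe -v -r nvme-tcp && break   # -v prints the rmmod steps seen in the log
  sleep 1                            # illustrative back-off, an assumption
done
modprobe -v -r nvme-fabrics          # then drop the fabrics core for good measure
set -e

Removing nvme-tcp first lets modprobe also drop the now-unused nvme_fabrics and nvme_keyring dependencies, which matches the rmmod sequence printed above.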
00:27:19.991 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:27:19.991 20:41:12 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:28.131 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:28.132 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:28.132 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.132 20:41:19 
nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:28.132 Found net devices under 0000:31:00.0: cvl_0_0 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:28.132 Found net devices under 0000:31:00.1: cvl_0_1 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:28.132 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.132 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.637 ms 00:27:28.132 00:27:28.132 --- 10.0.0.2 ping statistics --- 00:27:28.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.132 rtt min/avg/max/mdev = 0.637/0.637/0.637/0.000 ms 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.132 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:28.132 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:27:28.132 00:27:28.132 --- 10.0.0.1 ping statistics --- 00:27:28.132 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.132 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1486355 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1486355 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1486355 ']' 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.132 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.132 20:41:19 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.132 [2024-07-15 20:41:19.911050] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
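All of the nvmf_tcp_init activity traced above reduces to a short sequence of iproute2 and iptables calls: the first physical port (cvl_0_0) is moved into a private namespace for the target, its peer (cvl_0_1) stays in the root namespace for the initiator, and one ping in each direction proves the 10.0.0.0/24 link. Condensed from the trace, using the same device names and addresses:

# Condensed from the nvmf/common.sh trace above: move the target port into
# a private namespace and verify connectivity in both directions.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # NVMe/TCP port
ping -c 1 10.0.0.2                                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target ns -> root ns

Running the target in its own namespace is what later lets this test yank the interface out from under a live connection without disturbing the initiator side.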
00:27:28.133 [2024-07-15 20:41:19.911118] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.133 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.133 [2024-07-15 20:41:20.007196] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.133 [2024-07-15 20:41:20.104370] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.133 [2024-07-15 20:41:20.104435] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.133 [2024-07-15 20:41:20.104444] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.133 [2024-07-15 20:41:20.104451] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.133 [2024-07-15 20:41:20.104457] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.133 [2024-07-15 20:41:20.104490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.393 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.393 [2024-07-15 20:41:20.752367] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.393 [2024-07-15 20:41:20.760570] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:28.653 null0 00:27:28.653 [2024-07-15 20:41:20.792545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1486499 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1486499 /tmp/host.sock 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1486499 ']' 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 
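waitforlisten above blocks until the freshly launched app answers on its RPC socket (the trace shows rpc_addr=/var/tmp/spdk.sock for the target, /tmp/host.sock for the host app, and max_retries=100). A rough equivalent, assuming rpc_get_methods as the readiness probe; the real helper may probe differently:

# Rough sketch of the waitforlisten helper traced above. Assumption:
# rpc_get_methods as the probe call; rpc_addr and max_retries mirror the trace.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1                  # app died before listening
    "$SPDK/scripts/rpc.py" -s "$rpc_addr" rpc_get_methods &>/dev/null && return 0
    sleep 0.5                                               # illustrative back-off
  done
  return 1
}

In the trace this runs first as waitforlisten 1486355 for the target on /var/tmp/spdk.sock, then as waitforlisten 1486499 /tmp/host.sock for the host app started with --wait-for-rpc.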
00:27:28.653 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.653 20:41:20 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:28.653 [2024-07-15 20:41:20.867244] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:27:28.653 [2024-07-15 20:41:20.867309] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1486499 ] 00:27:28.653 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.653 [2024-07-15 20:41:20.938079] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.653 [2024-07-15 20:41:21.012849] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:29.592 20:41:21 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.532 [2024-07-15 20:41:22.761474] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.532 [2024-07-15 20:41:22.761496] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.532 [2024-07-15 20:41:22.761510] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.532 [2024-07-15 20:41:22.889928] 
bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:30.792 [2024-07-15 20:41:23.116979] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:30.792 [2024-07-15 20:41:23.117027] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:30.792 [2024-07-15 20:41:23.117048] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:30.792 [2024-07-15 20:41:23.117064] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.792 [2024-07-15 20:41:23.117084] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:30.792 [2024-07-15 20:41:23.120475] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10bf500 was disconnected and freed. delete nvme_qpair. 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.792 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:31.054 20:41:23 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.054 20:41:23 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.994 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.255 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:32.255 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.255 20:41:24 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:33.198 20:41:25 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.140 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:34.401 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:34.401 20:41:26 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.340 20:41:27 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:35.340 20:41:27 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:36.276 [2024-07-15 20:41:28.557742] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:36.276 [2024-07-15 20:41:28.557781] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.276 [2024-07-15 20:41:28.557793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.276 [2024-07-15 20:41:28.557802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.276 [2024-07-15 20:41:28.557809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.276 [2024-07-15 20:41:28.557817] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.276 [2024-07-15 20:41:28.557824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.276 [2024-07-15 20:41:28.557832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.276 [2024-07-15 20:41:28.557839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.276 [2024-07-15 20:41:28.557847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:36.276 [2024-07-15 20:41:28.557854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:36.276 [2024-07-15 20:41:28.557861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10860a0 is same with the state(5) to be set 00:27:36.276 [2024-07-15 20:41:28.567761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10860a0 (9): Bad file descriptor 00:27:36.276 [2024-07-15 20:41:28.577803] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:36.276 20:41:28 
nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.276 20:41:28 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.658 [2024-07-15 20:41:29.617271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:37.658 [2024-07-15 20:41:29.617310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x10860a0 with addr=10.0.0.2, port=4420 00:27:37.658 [2024-07-15 20:41:29.617323] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10860a0 is same with the state(5) to be set 00:27:37.658 [2024-07-15 20:41:29.617345] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x10860a0 (9): Bad file descriptor 00:27:37.658 [2024-07-15 20:41:29.617709] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:27:37.658 [2024-07-15 20:41:29.617733] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:37.658 [2024-07-15 20:41:29.617741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:37.658 [2024-07-15 20:41:29.617751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:37.658 [2024-07-15 20:41:29.617766] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:37.658 [2024-07-15 20:41:29.617774] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:37.659 20:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:37.659 20:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:37.659 20:41:29 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.594 [2024-07-15 20:41:30.620149] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:38.594 [2024-07-15 20:41:30.620179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:38.594 [2024-07-15 20:41:30.620187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:38.594 [2024-07-15 20:41:30.620195] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:27:38.594 [2024-07-15 20:41:30.620209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
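[Editor's note] The get_bdev_list/wait_for_bdev trace repeated above boils down to polling bdev_get_bdevs over the application's RPC socket until the expected bdev name appears (or, with an empty argument, disappears). A minimal sketch of that pattern, assuming SPDK's scripts/rpc.py is on PATH and the target listens on /tmp/host.sock; rpc_cmd in the trace is a thin wrapper around that script:

# Sketch only; mirrors the helpers traced above, not the full test script.
get_bdev_list() {
    rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
wait_for_bdev() {
    local expected=$1             # e.g. nvme0n1, or '' to wait for removal
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}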
00:27:38.594 [2024-07-15 20:41:30.620234] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:38.594 [2024-07-15 20:41:30.620260] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.594 [2024-07-15 20:41:30.620271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.594 [2024-07-15 20:41:30.620283] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.594 [2024-07-15 20:41:30.620290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.594 [2024-07-15 20:41:30.620298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.594 [2024-07-15 20:41:30.620306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.594 [2024-07-15 20:41:30.620314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.594 [2024-07-15 20:41:30.620321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.594 [2024-07-15 20:41:30.620329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:38.594 [2024-07-15 20:41:30.620342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:38.594 [2024-07-15 20:41:30.620350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
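[Editor's note] The reconnect/reset failures above are the expected outcome of the interface teardown issued earlier in the trace: with --ctrlr-loss-timeout-sec 2 and --reconnect-delay-sec 1 the host retries roughly once a second until the loss timeout expires, at which point the discovery service removes the entry, as logged by remove_discovery_entry. The provoking commands, exactly as traced:

# Target-side interface teardown (namespace and device names as in the trace).
ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down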
00:27:38.594 [2024-07-15 20:41:30.620932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1085520 (9): Bad file descriptor 00:27:38.594 [2024-07-15 20:41:30.621945] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:38.594 [2024-07-15 20:41:30.621957] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.594 20:41:30 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.536 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:39.796 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:39.796 20:41:31 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.367 [2024-07-15 20:41:32.677390] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:40.367 [2024-07-15 20:41:32.677413] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:40.367 [2024-07-15 20:41:32.677426] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:40.629 [2024-07-15 20:41:32.806857] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:40.629 [2024-07-15 20:41:32.866590] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:40.629 [2024-07-15 20:41:32.866626] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:40.629 [2024-07-15 20:41:32.866644] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:40.629 [2024-07-15 20:41:32.866658] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:40.629 [2024-07-15 20:41:32.866666] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:40.629 [2024-07-15 20:41:32.874500] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x10c8cb0 was disconnected and freed. delete nvme_qpair. 
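[Editor's note] Bringing the interface back (the ip addr add / ip link set up commands traced above) lets the still-running discovery poller reattach the subsystem. Because the original controller was torn down, the new attach comes up under the next free name, nvme1, which is why the script now waits for nvme1n1 rather than nvme0n1:

# Restore the target-side interface, then wait for the re-attached bdev
# (wait_for_bdev as sketched earlier).
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
wait_for_bdev nvme1n1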
00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1486499 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1486499 ']' 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1486499 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.629 20:41:32 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486499 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486499' 00:27:40.890 killing process with pid 1486499 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1486499 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1486499 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:40.890 rmmod nvme_tcp 00:27:40.890 rmmod nvme_fabrics 00:27:40.890 rmmod nvme_keyring 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
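[Editor's note] killprocess, traced above for the host application and again below for the target, reduces to a liveness check plus kill-and-wait. A condensed sketch of what the trace exercises; the real helper in autotest_common.sh carries extra guards (for example, refusing to kill a process whose comm is sudo):

# Condensed from the killprocess trace above; not the full helper.
killprocess() {
    local pid=$1
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" || return 0                   # already gone
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid" 2>/dev/null || true
}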
00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1486355 ']' 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1486355 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1486355 ']' 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1486355 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:40.890 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1486355 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1486355' 00:27:41.150 killing process with pid 1486355 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1486355 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1486355 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:41.150 20:41:33 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.692 20:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:43.692 00:27:43.692 real 0m23.540s 00:27:43.692 user 0m27.344s 00:27:43.692 sys 0m7.166s 00:27:43.692 20:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:43.692 20:41:35 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.692 ************************************ 00:27:43.692 END TEST nvmf_discovery_remove_ifc 00:27:43.692 ************************************ 00:27:43.692 20:41:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:43.692 20:41:35 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:43.692 20:41:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:43.692 20:41:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:43.692 20:41:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:43.692 ************************************ 00:27:43.692 START TEST nvmf_identify_kernel_target 00:27:43.692 ************************************ 
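[Editor's note] nvmftestfini closes out the test by unloading the kernel nvme modules, killing the target, and flushing the namespaced network state so the next test starts clean. The network half of that teardown, as traced; note that _remove_spdk_ns runs with xtrace disabled in the log above, so its body does not appear, and the netns deletion below is an assumption about what it does:

# Final network cleanup; the netns delete is assumed, the flush is traced.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1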
00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:43.692 * Looking for test storage... 00:27:43.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:43.692 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:43.693 20:41:35 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:27:43.693 20:41:35 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:27:51.834 Found 0000:31:00.0 (0x8086 - 0x159b) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:27:51.834 Found 0000:31:00.1 (0x8086 - 0x159b) 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:51.834 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:27:51.835 Found net devices under 0000:31:00.0: cvl_0_0 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:27:51.835 Found net devices under 0000:31:00.1: cvl_0_1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:51.835 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.835 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.513 ms 00:27:51.835 00:27:51.835 --- 10.0.0.2 ping statistics --- 00:27:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.835 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:51.835 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.835 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:27:51.835 00:27:51.835 --- 10.0.0.1 ping statistics --- 00:27:51.835 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.835 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:51.835 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.836 20:41:43 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:51.836 20:41:43 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:55.204 Waiting for block devices as requested 00:27:55.204 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:55.465 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:55.465 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:55.465 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:55.725 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:55.725 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:55.725 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:55.986 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:55.986 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:27:55.986 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:27:56.246 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:27:56.246 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:27:56.246 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:27:56.246 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:27:56.506 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:27:56.506 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:27:56.506 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:56.506 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:56.768 No valid GPT data, bailing 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:56.768 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:27:56.769 00:27:56.769 Discovery Log Number of Records 2, Generation counter 2 00:27:56.769 =====Discovery Log Entry 0====== 00:27:56.769 trtype: tcp 00:27:56.769 adrfam: ipv4 00:27:56.769 subtype: current discovery subsystem 00:27:56.769 treq: not specified, sq flow control disable supported 00:27:56.769 portid: 1 00:27:56.769 trsvcid: 4420 00:27:56.769 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:56.769 traddr: 10.0.0.1 00:27:56.769 eflags: none 00:27:56.769 sectype: none 00:27:56.769 =====Discovery Log Entry 1====== 00:27:56.769 trtype: tcp 00:27:56.769 adrfam: ipv4 00:27:56.769 subtype: nvme subsystem 00:27:56.769 treq: not specified, sq flow control disable supported 00:27:56.769 portid: 1 00:27:56.769 trsvcid: 4420 00:27:56.769 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:56.769 traddr: 10.0.0.1 00:27:56.769 eflags: none 00:27:56.769 sectype: none 00:27:56.769 20:41:48 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:56.769 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:56.769 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.769 ===================================================== 00:27:56.769 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:56.769 ===================================================== 00:27:56.769 Controller Capabilities/Features 00:27:56.769 ================================ 00:27:56.769 Vendor ID: 0000 00:27:56.769 Subsystem Vendor ID: 0000 00:27:56.769 Serial Number: 93dfebee80ff2e52b251 00:27:56.769 Model Number: Linux 00:27:56.769 Firmware Version: 6.7.0-68 00:27:56.769 Recommended Arb Burst: 0 00:27:56.769 IEEE OUI Identifier: 00 00 00 00:27:56.769 Multi-path I/O 00:27:56.769 May have multiple subsystem ports: No 00:27:56.769 May have multiple 
controllers: No 00:27:56.769 Associated with SR-IOV VF: No 00:27:56.769 Max Data Transfer Size: Unlimited 00:27:56.769 Max Number of Namespaces: 0 00:27:56.769 Max Number of I/O Queues: 1024 00:27:56.769 NVMe Specification Version (VS): 1.3 00:27:56.769 NVMe Specification Version (Identify): 1.3 00:27:56.769 Maximum Queue Entries: 1024 00:27:56.769 Contiguous Queues Required: No 00:27:56.769 Arbitration Mechanisms Supported 00:27:56.769 Weighted Round Robin: Not Supported 00:27:56.769 Vendor Specific: Not Supported 00:27:56.769 Reset Timeout: 7500 ms 00:27:56.769 Doorbell Stride: 4 bytes 00:27:56.769 NVM Subsystem Reset: Not Supported 00:27:56.769 Command Sets Supported 00:27:56.769 NVM Command Set: Supported 00:27:56.769 Boot Partition: Not Supported 00:27:56.769 Memory Page Size Minimum: 4096 bytes 00:27:56.769 Memory Page Size Maximum: 4096 bytes 00:27:56.769 Persistent Memory Region: Not Supported 00:27:56.769 Optional Asynchronous Events Supported 00:27:56.769 Namespace Attribute Notices: Not Supported 00:27:56.769 Firmware Activation Notices: Not Supported 00:27:56.769 ANA Change Notices: Not Supported 00:27:56.769 PLE Aggregate Log Change Notices: Not Supported 00:27:56.769 LBA Status Info Alert Notices: Not Supported 00:27:56.769 EGE Aggregate Log Change Notices: Not Supported 00:27:56.769 Normal NVM Subsystem Shutdown event: Not Supported 00:27:56.769 Zone Descriptor Change Notices: Not Supported 00:27:56.769 Discovery Log Change Notices: Supported 00:27:56.769 Controller Attributes 00:27:56.769 128-bit Host Identifier: Not Supported 00:27:56.769 Non-Operational Permissive Mode: Not Supported 00:27:56.769 NVM Sets: Not Supported 00:27:56.769 Read Recovery Levels: Not Supported 00:27:56.769 Endurance Groups: Not Supported 00:27:56.769 Predictable Latency Mode: Not Supported 00:27:56.769 Traffic Based Keep ALive: Not Supported 00:27:56.769 Namespace Granularity: Not Supported 00:27:56.769 SQ Associations: Not Supported 00:27:56.769 UUID List: Not Supported 00:27:56.769 Multi-Domain Subsystem: Not Supported 00:27:56.769 Fixed Capacity Management: Not Supported 00:27:56.769 Variable Capacity Management: Not Supported 00:27:56.769 Delete Endurance Group: Not Supported 00:27:56.769 Delete NVM Set: Not Supported 00:27:56.769 Extended LBA Formats Supported: Not Supported 00:27:56.769 Flexible Data Placement Supported: Not Supported 00:27:56.769 00:27:56.769 Controller Memory Buffer Support 00:27:56.769 ================================ 00:27:56.769 Supported: No 00:27:56.769 00:27:56.769 Persistent Memory Region Support 00:27:56.769 ================================ 00:27:56.769 Supported: No 00:27:56.769 00:27:56.769 Admin Command Set Attributes 00:27:56.769 ============================ 00:27:56.769 Security Send/Receive: Not Supported 00:27:56.769 Format NVM: Not Supported 00:27:56.769 Firmware Activate/Download: Not Supported 00:27:56.769 Namespace Management: Not Supported 00:27:56.769 Device Self-Test: Not Supported 00:27:56.769 Directives: Not Supported 00:27:56.769 NVMe-MI: Not Supported 00:27:56.769 Virtualization Management: Not Supported 00:27:56.769 Doorbell Buffer Config: Not Supported 00:27:56.769 Get LBA Status Capability: Not Supported 00:27:56.769 Command & Feature Lockdown Capability: Not Supported 00:27:56.769 Abort Command Limit: 1 00:27:56.769 Async Event Request Limit: 1 00:27:56.769 Number of Firmware Slots: N/A 00:27:56.769 Firmware Slot 1 Read-Only: N/A 00:27:56.769 Firmware Activation Without Reset: N/A 00:27:56.769 Multiple Update Detection Support: N/A 
00:27:56.769 Firmware Update Granularity: No Information Provided 00:27:56.769 Per-Namespace SMART Log: No 00:27:56.769 Asymmetric Namespace Access Log Page: Not Supported 00:27:56.769 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:56.769 Command Effects Log Page: Not Supported 00:27:56.769 Get Log Page Extended Data: Supported 00:27:56.769 Telemetry Log Pages: Not Supported 00:27:56.769 Persistent Event Log Pages: Not Supported 00:27:56.769 Supported Log Pages Log Page: May Support 00:27:56.769 Commands Supported & Effects Log Page: Not Supported 00:27:56.769 Feature Identifiers & Effects Log Page:May Support 00:27:56.769 NVMe-MI Commands & Effects Log Page: May Support 00:27:56.769 Data Area 4 for Telemetry Log: Not Supported 00:27:56.769 Error Log Page Entries Supported: 1 00:27:56.769 Keep Alive: Not Supported 00:27:56.769 00:27:56.769 NVM Command Set Attributes 00:27:56.769 ========================== 00:27:56.769 Submission Queue Entry Size 00:27:56.769 Max: 1 00:27:56.769 Min: 1 00:27:56.769 Completion Queue Entry Size 00:27:56.769 Max: 1 00:27:56.769 Min: 1 00:27:56.769 Number of Namespaces: 0 00:27:56.769 Compare Command: Not Supported 00:27:56.769 Write Uncorrectable Command: Not Supported 00:27:56.769 Dataset Management Command: Not Supported 00:27:56.769 Write Zeroes Command: Not Supported 00:27:56.769 Set Features Save Field: Not Supported 00:27:56.769 Reservations: Not Supported 00:27:56.769 Timestamp: Not Supported 00:27:56.769 Copy: Not Supported 00:27:56.769 Volatile Write Cache: Not Present 00:27:56.769 Atomic Write Unit (Normal): 1 00:27:56.769 Atomic Write Unit (PFail): 1 00:27:56.769 Atomic Compare & Write Unit: 1 00:27:56.769 Fused Compare & Write: Not Supported 00:27:56.769 Scatter-Gather List 00:27:56.769 SGL Command Set: Supported 00:27:56.769 SGL Keyed: Not Supported 00:27:56.769 SGL Bit Bucket Descriptor: Not Supported 00:27:56.769 SGL Metadata Pointer: Not Supported 00:27:56.769 Oversized SGL: Not Supported 00:27:56.769 SGL Metadata Address: Not Supported 00:27:56.769 SGL Offset: Supported 00:27:56.769 Transport SGL Data Block: Not Supported 00:27:56.769 Replay Protected Memory Block: Not Supported 00:27:56.769 00:27:56.769 Firmware Slot Information 00:27:56.769 ========================= 00:27:56.769 Active slot: 0 00:27:56.769 00:27:56.769 00:27:56.769 Error Log 00:27:56.769 ========= 00:27:56.769 00:27:56.769 Active Namespaces 00:27:56.769 ================= 00:27:56.769 Discovery Log Page 00:27:56.769 ================== 00:27:56.769 Generation Counter: 2 00:27:56.769 Number of Records: 2 00:27:56.769 Record Format: 0 00:27:56.769 00:27:56.769 Discovery Log Entry 0 00:27:56.769 ---------------------- 00:27:56.769 Transport Type: 3 (TCP) 00:27:56.769 Address Family: 1 (IPv4) 00:27:56.769 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:56.769 Entry Flags: 00:27:56.769 Duplicate Returned Information: 0 00:27:56.769 Explicit Persistent Connection Support for Discovery: 0 00:27:56.769 Transport Requirements: 00:27:56.769 Secure Channel: Not Specified 00:27:56.769 Port ID: 1 (0x0001) 00:27:56.770 Controller ID: 65535 (0xffff) 00:27:56.770 Admin Max SQ Size: 32 00:27:56.770 Transport Service Identifier: 4420 00:27:56.770 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:56.770 Transport Address: 10.0.0.1 00:27:56.770 Discovery Log Entry 1 00:27:56.770 ---------------------- 00:27:56.770 Transport Type: 3 (TCP) 00:27:56.770 Address Family: 1 (IPv4) 00:27:56.770 Subsystem Type: 2 (NVM Subsystem) 00:27:56.770 Entry Flags: 
00:27:56.770 Duplicate Returned Information: 0 00:27:56.770 Explicit Persistent Connection Support for Discovery: 0 00:27:56.770 Transport Requirements: 00:27:56.770 Secure Channel: Not Specified 00:27:56.770 Port ID: 1 (0x0001) 00:27:56.770 Controller ID: 65535 (0xffff) 00:27:56.770 Admin Max SQ Size: 32 00:27:56.770 Transport Service Identifier: 4420 00:27:56.770 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:56.770 Transport Address: 10.0.0.1 00:27:56.770 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:56.770 EAL: No free 2048 kB hugepages reported on node 1 00:27:56.770 get_feature(0x01) failed 00:27:56.770 get_feature(0x02) failed 00:27:56.770 get_feature(0x04) failed 00:27:56.770 ===================================================== 00:27:56.770 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:56.770 ===================================================== 00:27:56.770 Controller Capabilities/Features 00:27:56.770 ================================ 00:27:56.770 Vendor ID: 0000 00:27:56.770 Subsystem Vendor ID: 0000 00:27:56.770 Serial Number: ecc60d6847ff0594eca5 00:27:56.770 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:56.770 Firmware Version: 6.7.0-68 00:27:56.770 Recommended Arb Burst: 6 00:27:56.770 IEEE OUI Identifier: 00 00 00 00:27:56.770 Multi-path I/O 00:27:56.770 May have multiple subsystem ports: Yes 00:27:56.770 May have multiple controllers: Yes 00:27:56.770 Associated with SR-IOV VF: No 00:27:56.770 Max Data Transfer Size: Unlimited 00:27:56.770 Max Number of Namespaces: 1024 00:27:56.770 Max Number of I/O Queues: 128 00:27:56.770 NVMe Specification Version (VS): 1.3 00:27:56.770 NVMe Specification Version (Identify): 1.3 00:27:56.770 Maximum Queue Entries: 1024 00:27:56.770 Contiguous Queues Required: No 00:27:56.770 Arbitration Mechanisms Supported 00:27:56.770 Weighted Round Robin: Not Supported 00:27:56.770 Vendor Specific: Not Supported 00:27:56.770 Reset Timeout: 7500 ms 00:27:56.770 Doorbell Stride: 4 bytes 00:27:56.770 NVM Subsystem Reset: Not Supported 00:27:56.770 Command Sets Supported 00:27:56.770 NVM Command Set: Supported 00:27:56.770 Boot Partition: Not Supported 00:27:56.770 Memory Page Size Minimum: 4096 bytes 00:27:56.770 Memory Page Size Maximum: 4096 bytes 00:27:56.770 Persistent Memory Region: Not Supported 00:27:56.770 Optional Asynchronous Events Supported 00:27:56.770 Namespace Attribute Notices: Supported 00:27:56.770 Firmware Activation Notices: Not Supported 00:27:56.770 ANA Change Notices: Supported 00:27:56.770 PLE Aggregate Log Change Notices: Not Supported 00:27:56.770 LBA Status Info Alert Notices: Not Supported 00:27:56.770 EGE Aggregate Log Change Notices: Not Supported 00:27:56.770 Normal NVM Subsystem Shutdown event: Not Supported 00:27:56.770 Zone Descriptor Change Notices: Not Supported 00:27:56.770 Discovery Log Change Notices: Not Supported 00:27:56.770 Controller Attributes 00:27:56.770 128-bit Host Identifier: Supported 00:27:56.770 Non-Operational Permissive Mode: Not Supported 00:27:56.770 NVM Sets: Not Supported 00:27:56.770 Read Recovery Levels: Not Supported 00:27:56.770 Endurance Groups: Not Supported 00:27:56.770 Predictable Latency Mode: Not Supported 00:27:56.770 Traffic Based Keep ALive: Supported 00:27:56.770 Namespace Granularity: Not Supported 
00:27:56.770 SQ Associations: Not Supported 00:27:56.770 UUID List: Not Supported 00:27:56.770 Multi-Domain Subsystem: Not Supported 00:27:56.770 Fixed Capacity Management: Not Supported 00:27:56.770 Variable Capacity Management: Not Supported 00:27:56.770 Delete Endurance Group: Not Supported 00:27:56.770 Delete NVM Set: Not Supported 00:27:56.770 Extended LBA Formats Supported: Not Supported 00:27:56.770 Flexible Data Placement Supported: Not Supported 00:27:56.770 00:27:56.770 Controller Memory Buffer Support 00:27:56.770 ================================ 00:27:56.770 Supported: No 00:27:56.770 00:27:56.770 Persistent Memory Region Support 00:27:56.770 ================================ 00:27:56.770 Supported: No 00:27:56.770 00:27:56.770 Admin Command Set Attributes 00:27:56.770 ============================ 00:27:56.770 Security Send/Receive: Not Supported 00:27:56.770 Format NVM: Not Supported 00:27:56.770 Firmware Activate/Download: Not Supported 00:27:56.770 Namespace Management: Not Supported 00:27:56.770 Device Self-Test: Not Supported 00:27:56.770 Directives: Not Supported 00:27:56.770 NVMe-MI: Not Supported 00:27:56.770 Virtualization Management: Not Supported 00:27:56.770 Doorbell Buffer Config: Not Supported 00:27:56.770 Get LBA Status Capability: Not Supported 00:27:56.770 Command & Feature Lockdown Capability: Not Supported 00:27:56.770 Abort Command Limit: 4 00:27:56.770 Async Event Request Limit: 4 00:27:56.770 Number of Firmware Slots: N/A 00:27:56.770 Firmware Slot 1 Read-Only: N/A 00:27:56.770 Firmware Activation Without Reset: N/A 00:27:56.770 Multiple Update Detection Support: N/A 00:27:56.770 Firmware Update Granularity: No Information Provided 00:27:56.770 Per-Namespace SMART Log: Yes 00:27:56.770 Asymmetric Namespace Access Log Page: Supported 00:27:56.770 ANA Transition Time : 10 sec 00:27:56.770 00:27:56.770 Asymmetric Namespace Access Capabilities 00:27:56.770 ANA Optimized State : Supported 00:27:56.770 ANA Non-Optimized State : Supported 00:27:56.770 ANA Inaccessible State : Supported 00:27:56.770 ANA Persistent Loss State : Supported 00:27:56.770 ANA Change State : Supported 00:27:56.770 ANAGRPID is not changed : No 00:27:56.770 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:56.770 00:27:56.770 ANA Group Identifier Maximum : 128 00:27:56.770 Number of ANA Group Identifiers : 128 00:27:56.770 Max Number of Allowed Namespaces : 1024 00:27:56.770 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:56.770 Command Effects Log Page: Supported 00:27:56.770 Get Log Page Extended Data: Supported 00:27:56.770 Telemetry Log Pages: Not Supported 00:27:56.770 Persistent Event Log Pages: Not Supported 00:27:56.770 Supported Log Pages Log Page: May Support 00:27:56.770 Commands Supported & Effects Log Page: Not Supported 00:27:56.770 Feature Identifiers & Effects Log Page:May Support 00:27:56.770 NVMe-MI Commands & Effects Log Page: May Support 00:27:56.770 Data Area 4 for Telemetry Log: Not Supported 00:27:56.770 Error Log Page Entries Supported: 128 00:27:56.770 Keep Alive: Supported 00:27:56.770 Keep Alive Granularity: 1000 ms 00:27:56.770 00:27:56.770 NVM Command Set Attributes 00:27:56.770 ========================== 00:27:56.770 Submission Queue Entry Size 00:27:56.770 Max: 64 00:27:56.770 Min: 64 00:27:56.770 Completion Queue Entry Size 00:27:56.770 Max: 16 00:27:56.770 Min: 16 00:27:56.770 Number of Namespaces: 1024 00:27:56.770 Compare Command: Not Supported 00:27:56.770 Write Uncorrectable Command: Not Supported 00:27:56.770 Dataset Management Command: Supported 
00:27:56.770 Write Zeroes Command: Supported 00:27:56.770 Set Features Save Field: Not Supported 00:27:56.770 Reservations: Not Supported 00:27:56.770 Timestamp: Not Supported 00:27:56.770 Copy: Not Supported 00:27:56.770 Volatile Write Cache: Present 00:27:56.770 Atomic Write Unit (Normal): 1 00:27:56.770 Atomic Write Unit (PFail): 1 00:27:56.770 Atomic Compare & Write Unit: 1 00:27:56.770 Fused Compare & Write: Not Supported 00:27:56.770 Scatter-Gather List 00:27:56.770 SGL Command Set: Supported 00:27:56.770 SGL Keyed: Not Supported 00:27:56.770 SGL Bit Bucket Descriptor: Not Supported 00:27:56.770 SGL Metadata Pointer: Not Supported 00:27:56.770 Oversized SGL: Not Supported 00:27:56.770 SGL Metadata Address: Not Supported 00:27:56.770 SGL Offset: Supported 00:27:56.770 Transport SGL Data Block: Not Supported 00:27:56.770 Replay Protected Memory Block: Not Supported 00:27:56.770 00:27:56.770 Firmware Slot Information 00:27:56.770 ========================= 00:27:56.770 Active slot: 0 00:27:56.770 00:27:56.770 Asymmetric Namespace Access 00:27:56.770 =========================== 00:27:56.770 Change Count : 0 00:27:56.770 Number of ANA Group Descriptors : 1 00:27:56.770 ANA Group Descriptor : 0 00:27:56.770 ANA Group ID : 1 00:27:56.770 Number of NSID Values : 1 00:27:56.770 Change Count : 0 00:27:56.770 ANA State : 1 00:27:56.770 Namespace Identifier : 1 00:27:56.770 00:27:56.770 Commands Supported and Effects 00:27:56.770 ============================== 00:27:56.770 Admin Commands 00:27:56.770 -------------- 00:27:56.770 Get Log Page (02h): Supported 00:27:56.770 Identify (06h): Supported 00:27:56.771 Abort (08h): Supported 00:27:56.771 Set Features (09h): Supported 00:27:56.771 Get Features (0Ah): Supported 00:27:56.771 Asynchronous Event Request (0Ch): Supported 00:27:56.771 Keep Alive (18h): Supported 00:27:56.771 I/O Commands 00:27:56.771 ------------ 00:27:56.771 Flush (00h): Supported 00:27:56.771 Write (01h): Supported LBA-Change 00:27:56.771 Read (02h): Supported 00:27:56.771 Write Zeroes (08h): Supported LBA-Change 00:27:56.771 Dataset Management (09h): Supported 00:27:56.771 00:27:56.771 Error Log 00:27:56.771 ========= 00:27:56.771 Entry: 0 00:27:56.771 Error Count: 0x3 00:27:56.771 Submission Queue Id: 0x0 00:27:56.771 Command Id: 0x5 00:27:56.771 Phase Bit: 0 00:27:56.771 Status Code: 0x2 00:27:56.771 Status Code Type: 0x0 00:27:56.771 Do Not Retry: 1 00:27:56.771 Error Location: 0x28 00:27:56.771 LBA: 0x0 00:27:56.771 Namespace: 0x0 00:27:56.771 Vendor Log Page: 0x0 00:27:56.771 ----------- 00:27:56.771 Entry: 1 00:27:56.771 Error Count: 0x2 00:27:56.771 Submission Queue Id: 0x0 00:27:56.771 Command Id: 0x5 00:27:56.771 Phase Bit: 0 00:27:56.771 Status Code: 0x2 00:27:56.771 Status Code Type: 0x0 00:27:56.771 Do Not Retry: 1 00:27:56.771 Error Location: 0x28 00:27:56.771 LBA: 0x0 00:27:56.771 Namespace: 0x0 00:27:56.771 Vendor Log Page: 0x0 00:27:56.771 ----------- 00:27:56.771 Entry: 2 00:27:56.771 Error Count: 0x1 00:27:56.771 Submission Queue Id: 0x0 00:27:56.771 Command Id: 0x4 00:27:56.771 Phase Bit: 0 00:27:56.771 Status Code: 0x2 00:27:56.771 Status Code Type: 0x0 00:27:56.771 Do Not Retry: 1 00:27:56.771 Error Location: 0x28 00:27:56.771 LBA: 0x0 00:27:56.771 Namespace: 0x0 00:27:56.771 Vendor Log Page: 0x0 00:27:56.771 00:27:56.771 Number of Queues 00:27:56.771 ================ 00:27:56.771 Number of I/O Submission Queues: 128 00:27:56.771 Number of I/O Completion Queues: 128 00:27:56.771 00:27:56.771 ZNS Specific Controller Data 00:27:56.771 
============================ 00:27:56.771 Zone Append Size Limit: 0 00:27:56.771 00:27:56.771 00:27:56.771 Active Namespaces 00:27:56.771 ================= 00:27:56.771 get_feature(0x05) failed 00:27:56.771 Namespace ID:1 00:27:56.771 Command Set Identifier: NVM (00h) 00:27:56.771 Deallocate: Supported 00:27:56.771 Deallocated/Unwritten Error: Not Supported 00:27:56.771 Deallocated Read Value: Unknown 00:27:56.771 Deallocate in Write Zeroes: Not Supported 00:27:56.771 Deallocated Guard Field: 0xFFFF 00:27:56.771 Flush: Supported 00:27:56.771 Reservation: Not Supported 00:27:56.771 Namespace Sharing Capabilities: Multiple Controllers 00:27:56.771 Size (in LBAs): 3750748848 (1788GiB) 00:27:56.771 Capacity (in LBAs): 3750748848 (1788GiB) 00:27:56.771 Utilization (in LBAs): 3750748848 (1788GiB) 00:27:56.771 UUID: 30a89a53-a76c-41ff-863e-a3747333c8ef 00:27:56.771 Thin Provisioning: Not Supported 00:27:56.771 Per-NS Atomic Units: Yes 00:27:56.771 Atomic Write Unit (Normal): 8 00:27:56.771 Atomic Write Unit (PFail): 8 00:27:56.771 Preferred Write Granularity: 8 00:27:56.771 Atomic Compare & Write Unit: 8 00:27:56.771 Atomic Boundary Size (Normal): 0 00:27:56.771 Atomic Boundary Size (PFail): 0 00:27:56.771 Atomic Boundary Offset: 0 00:27:56.771 NGUID/EUI64 Never Reused: No 00:27:56.771 ANA group ID: 1 00:27:56.771 Namespace Write Protected: No 00:27:56.771 Number of LBA Formats: 1 00:27:56.771 Current LBA Format: LBA Format #00 00:27:56.771 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:56.771 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:56.771 rmmod nvme_tcp 00:27:56.771 rmmod nvme_fabrics 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:56.771 20:41:49 nvmf_tcp.nvmf_identify_kernel_target 
-- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:59.317 20:41:51 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:02.619 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:02.619 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:02.619 00:28:02.619 real 0m19.023s 00:28:02.619 user 0m4.872s 00:28:02.619 sys 0m11.113s 00:28:02.619 20:41:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:02.619 20:41:54 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:02.619 ************************************ 00:28:02.619 END TEST nvmf_identify_kernel_target 00:28:02.619 ************************************ 00:28:02.619 20:41:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:02.619 20:41:54 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:02.619 20:41:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:02.619 20:41:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:02.619 20:41:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:02.619 ************************************ 
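For reference, everything nvmf_identify_kernel_target exercised above is plain configfs manipulation: nvmf/common.sh builds a kernel nvmet target around an existing block device, points a TCP port at it, and later tears both down. A condensed sketch of the traced sequence (the configfs attribute names are the standard kernel nvmet ABI and are not visible in the xtrace, which hides redirections; the NQN, device, and address values are the ones this run used):

# Set up a kernel NVMe-oF/TCP target via configfs (assumes nvmet and
# nvmet-tcp are already loaded, as they are earlier in this suite).
nqn=nqn.2016-06.io.spdk:testnqn
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1
mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn" > "$sub/attr_model"          # the Model Number seen in the Identify dump above
echo 1 > "$sub/attr_allow_any_host"           # skip host NQN allow-listing
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1 > "$sub/namespaces/1/enable"
echo 10.0.0.1 > "$port/addr_traddr"           # listen address
echo tcp > "$port/addr_trtype"
echo 4420 > "$port/addr_trsvcid"
echo ipv4 > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"              # port now serves the subsystem

# Teardown, as clean_kernel_target does in the trace just above:
echo 0 > "$sub/namespaces/1/enable"
rm -f "$port/subsystems/$nqn"
rmdir "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet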
00:28:02.619 START TEST nvmf_auth_host 00:28:02.619 ************************************ 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:02.619 * Looking for test storage... 00:28:02.619 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:28:02.619 20:41:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:10.749 
20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:28:10.749 Found 0000:31:00.0 (0x8086 - 0x159b) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:28:10.749 Found 0000:31:00.1 (0x8086 - 0x159b) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:28:10.749 Found net devices under 0000:31:00.0: 
cvl_0_0 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:28:10.749 Found net devices under 0000:31:00.1: cvl_0_1 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:10.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:10.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.619 ms 00:28:10.749 00:28:10.749 --- 10.0.0.2 ping statistics --- 00:28:10.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.749 rtt min/avg/max/mdev = 0.619/0.619/0.619/0.000 ms 00:28:10.749 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:10.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:10.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.249 ms 00:28:10.749 00:28:10.749 --- 10.0.0.1 ping statistics --- 00:28:10.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:10.750 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1501915 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1501915 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1501915 ']' 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
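The nvmftestinit path above gives the target its own network namespace so initiator and target can exercise a real E810 link inside one host: one port (cvl_0_0) moves into cvl_0_0_ns_spdk as 10.0.0.2, its peer (cvl_0_1) stays in the root namespace as 10.0.0.1, and a firewall rule plus two pings validate the path before the app starts. Condensed from the trace:

ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target side of the link
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # root ns -> target ns
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns
# nvmf_tgt is then launched inside the namespace:
#   ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth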
00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:10.750 20:42:02 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.320 20:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:11.320 20:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:11.320 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:11.320 20:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:11.320 20:42:03 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f60fbf037b2dfd9c762ba16310710f6e 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nvk 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f60fbf037b2dfd9c762ba16310710f6e 0 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f60fbf037b2dfd9c762ba16310710f6e 0 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f60fbf037b2dfd9c762ba16310710f6e 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nvk 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nvk 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.nvk 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:11.580 
20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b0ac66aed6e4ef521afa8139179288263b11fa07a3becfb6efeaca9b2f7aeccf 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.y2M 00:28:11.580 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b0ac66aed6e4ef521afa8139179288263b11fa07a3becfb6efeaca9b2f7aeccf 3 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b0ac66aed6e4ef521afa8139179288263b11fa07a3becfb6efeaca9b2f7aeccf 3 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b0ac66aed6e4ef521afa8139179288263b11fa07a3becfb6efeaca9b2f7aeccf 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.y2M 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.y2M 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.y2M 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=736d022d0d9c2119c967fc76a28c1e866738a4aab41e7add 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.fH6 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 736d022d0d9c2119c967fc76a28c1e866738a4aab41e7add 0 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 736d022d0d9c2119c967fc76a28c1e866738a4aab41e7add 0 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=736d022d0d9c2119c967fc76a28c1e866738a4aab41e7add 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.fH6 00:28:11.581 20:42:03 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.fH6 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.fH6 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=682686872d257d2163853bd0523932f8f2c521e9e90edb72 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.XeM 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 682686872d257d2163853bd0523932f8f2c521e9e90edb72 2 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 682686872d257d2163853bd0523932f8f2c521e9e90edb72 2 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=682686872d257d2163853bd0523932f8f2c521e9e90edb72 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:11.581 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.XeM 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.XeM 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.XeM 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2ea350f754e27dac40e10d9609f9e5c4 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.7Gx 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2ea350f754e27dac40e10d9609f9e5c4 1 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2ea350f754e27dac40e10d9609f9e5c4 1 
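A note on the key-generation traces: gen_dhchap_key's length argument counts hex characters while xxd -l counts bytes, which is why len=32 draws 16 bytes and len=64 draws 32. The digest name maps to the DHHC-1 hash identifier (null=0, sha256=1, sha384=2, sha512=3) that format_dhchap_key embeds in the formatted secret, for example:

# len=32 hex characters of key material = 16 bytes of entropy
key=$(xxd -p -c0 -l 16 /dev/urandom)
# formatted, a sha256 secret has the shape (value illustrative, not from this run):
#   DHHC-1:01:<base64 of the key bytes plus a CRC-32 check>: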
00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2ea350f754e27dac40e10d9609f9e5c4 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:11.842 20:42:03 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.7Gx 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.7Gx 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.7Gx 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.842 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2b05e743a5bf43735001b79b687a0243 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.1Fo 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2b05e743a5bf43735001b79b687a0243 1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2b05e743a5bf43735001b79b687a0243 1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2b05e743a5bf43735001b79b687a0243 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.1Fo 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.1Fo 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.1Fo 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=dd8ab5a678df610787d79e45fd2d48473f7b73565e1086bc 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.OGk 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key dd8ab5a678df610787d79e45fd2d48473f7b73565e1086bc 2 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 dd8ab5a678df610787d79e45fd2d48473f7b73565e1086bc 2 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=dd8ab5a678df610787d79e45fd2d48473f7b73565e1086bc 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.OGk 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.OGk 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.OGk 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=e8894bdcf817bf37a605713eb12eeb4b 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.kN6 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key e8894bdcf817bf37a605713eb12eeb4b 0 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 e8894bdcf817bf37a605713eb12eeb4b 0 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=e8894bdcf817bf37a605713eb12eeb4b 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:11.843 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.kN6 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.kN6 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.kN6 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=bab5dcc506237b5b85b49d1675c9c5a90bebba8a6ef63e265c5089f0dde0b680 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.lc7 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key bab5dcc506237b5b85b49d1675c9c5a90bebba8a6ef63e265c5089f0dde0b680 3 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 bab5dcc506237b5b85b49d1675c9c5a90bebba8a6ef63e265c5089f0dde0b680 3 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=bab5dcc506237b5b85b49d1675c9c5a90bebba8a6ef63e265c5089f0dde0b680 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.lc7 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.lc7 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.lc7 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1501915 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1501915 ']' 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:12.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
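Every gen_dhchap_key trace above follows the same pattern: xxd -p -c0 -l <len/2> /dev/urandom draws a hex string of the requested length, mktemp -t spdk.key-<digest>.XXX names the key file, and format_key wraps the string into a DHHC-1 secret via the inline python. A minimal sketch of that wrapping (not the nvmf/common.sh helper itself), assuming, consistently with the secrets echoed later in this log, that the ASCII hex string itself is the key material and a little-endian CRC-32 trailer is appended before base64 encoding:

    # Sketch only. Digest field: 0=null, 1=sha256, 2=sha384, 3=sha512
    # (the digests map visible in the trace above).
    gen_dhchap_key_sketch() {
        local digest=$1 len=$2 key file
        key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex characters
        file=$(mktemp -t "spdk.key-sketch.XXX")
        python3 -c 'import base64,sys,zlib; k=sys.argv[1].encode(); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$key" "$digest" > "$file"
        chmod 0600 "$file"
        echo "$file"
    }

As a cross-check against this run: key 2b05e743a5bf43735001b79b687a0243 with digest 1 reproduces the DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: secret that reappears below as ckey2.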
00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.nvk 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.y2M ]] 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.y2M 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.104 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.fH6 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.XeM ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.XeM 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.7Gx 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.1Fo ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.1Fo 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
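rpc_cmd in these traces is the autotest wrapper around SPDK's scripts/rpc.py, pointed at the /var/tmp/spdk.sock socket that waitforlisten confirmed above. The key-registration loop, written out as hypothetical standalone calls with the names and paths from this run:

    # Register each generated secret file with the SPDK keyring; the key0..key4
    # and ckey0..ckey4 names are what the attach_controller calls reference later.
    for i in "${!keys[@]}"; do
        ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "key$i" "${keys[i]}"
        if [[ -n ${ckeys[i]} ]]; then   # keyid 4 deliberately has no controller key
            ./scripts/rpc.py -s /var/tmp/spdk.sock keyring_file_add_key "ckey$i" "${ckeys[i]}"
        fi
    done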
00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.OGk 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kN6 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kN6 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.366 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.lc7 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
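The configure_kernel_target body that follows (the @641 through @677 steps) is the usual nvmet configfs bring-up. The xtrace prints each echoed value but not its redirection target, so the attribute names in this condensed sketch are an assumption based on the kernel's nvmet configfs layout; the paths and values are the ones from this run:

    nvmet=/sys/kernel/config/nvmet
    subsys=$nvmet/subsystems/nqn.2024-02.io.spdk:cnode0
    port=$nvmet/ports/1
    modprobe nvmet
    mkdir "$subsys" "$subsys/namespaces/1" "$port"
    echo "SPDK-nqn.2024-02.io.spdk:cnode0" > "$subsys/attr_model"
    echo 1 > "$subsys/attr_allow_any_host"   # presumably re-tightened by the later 'echo 0' once allowed_hosts is linked
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"
    echo 10.0.0.1 > "$port/addr_traddr"
    echo tcp > "$port/addr_trtype"
    echo 4420 > "$port/addr_trsvcid"
    echo ipv4 > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"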
00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:12.367 20:42:04 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:16.570 Waiting for block devices as requested 00:28:16.570 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:16.570 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:28:16.831 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:28:16.831 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:28:17.092 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:28:17.092 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:28:17.092 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:28:17.092 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:28:17.353 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:28:17.353 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:28:17.926 20:42:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:18.187 No valid GPT data, bailing 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:28:18.187 00:28:18.187 Discovery Log Number of Records 2, Generation counter 2 00:28:18.187 =====Discovery Log Entry 0====== 00:28:18.187 trtype: tcp 00:28:18.187 adrfam: ipv4 00:28:18.187 subtype: current discovery subsystem 00:28:18.187 treq: not specified, sq flow control disable supported 00:28:18.187 portid: 1 00:28:18.187 trsvcid: 4420 00:28:18.187 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:18.187 traddr: 10.0.0.1 00:28:18.187 eflags: none 00:28:18.187 sectype: none 00:28:18.187 =====Discovery Log Entry 1====== 00:28:18.187 trtype: tcp 00:28:18.187 adrfam: ipv4 00:28:18.187 subtype: nvme subsystem 00:28:18.187 treq: not specified, sq flow control disable supported 00:28:18.187 portid: 1 00:28:18.187 trsvcid: 4420 00:28:18.187 subnqn: nqn.2024-02.io.spdk:cnode0 00:28:18.187 traddr: 10.0.0.1 00:28:18.187 eflags: none 00:28:18.187 sectype: none 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 
]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.187 nvme0n1 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.187 20:42:10 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.187 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 
20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 nvme0n1 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.448 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.709 20:42:10 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.709 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.710 nvme0n1 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
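Each connect_authenticate round reduces to two initiator-side RPCs: constrain the allowed DHCHAP digests and DH groups, then attach with the registered keyring names. The sha256/ffdhe2048/keyid-1 round just traced, as standalone rpc.py calls with the flags exactly as they appear in the log:

    ./scripts/rpc.py bdev_nvme_set_options \
        --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
    ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    # The round passes when bdev_nvme_get_controllers reports nvme0; the
    # controller is then detached before the next digest/dhgroup/key combination.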
00:28:18.710 20:42:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.710 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.972 nvme0n1 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:18.972 20:42:11 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:18.972 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.233 nvme0n1 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.233 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.234 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.234 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.234 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.494 nvme0n1 00:28:19.494 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.494 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.494 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.494 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.495 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.756 nvme0n1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:19.756 20:42:11 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.017 nvme0n1 00:28:20.017 
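On the target side, each nvmet_auth_set_key trace (the @48 through @51 echoes) pushes the expected hash, DH group, and secrets into the host's nvmet configfs entry. The redirection targets are again not shown in the xtrace, so the attribute names below are an assumption based on the kernel's nvmet host directory, with the secrets elided:

    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
    echo 'hmac(sha256)' > "$host/dhchap_hash"     # digest under test
    echo ffdhe2048 > "$host/dhchap_dhgroup"       # DH group under test
    echo "DHHC-1:...:" > "$host/dhchap_key"       # keys[keyid], elided here
    echo "DHHC-1:...:" > "$host/dhchap_ctrl_key"  # ckeys[keyid], only when non-empty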
20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.017 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.018 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 nvme0n1 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
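The nvmet_auth_set_key calls traced at host/auth.sh@42-@51 provision the target half of each pass: the four echo lines ('hmac(sha256)', the DH group, the key, and, when present, the controller key) are consistent with writes into the kernel nvmet configfs host entry. A hedged sketch of that function, assuming the standard /sys/kernel/config/nvmet layout and a hostnqn variable, neither of which is shown in this window:

  # Sketch of the target-side provisioning implied by @48-@51; the configfs
  # paths and $hostnqn are assumptions, not visible in this log.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}             # @45/@46
      local host=/sys/kernel/config/nvmet/hosts/$hostnqn

      echo "hmac($digest)" > "$host/dhchap_hash"                # @48
      echo "$dhgroup" > "$host/dhchap_dhgroup"                  # @49
      echo "$key" > "$host/dhchap_key"                          # @50
      [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"  # @51: bidirectional only
  }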
00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.279 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.540 nvme0n1 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.540 
20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.540 20:42:12 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.540 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.801 nvme0n1 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.801 20:42:12 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:20.801 20:42:13 
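connect_authenticate (host/auth.sh@55-@65) is the host half of each iteration: it narrows the bdev_nvme DH-CHAP policy to the one digest/DH-group pair under test, attaches over TCP with the matching key names, checks that exactly the expected controller appeared, and detaches. Condensed from the calls visible in this trace (the real function also resolves non-TCP transports through get_main_ns_ip and stores the address in a variable):

  # Host-side flow condensed from the @55-@65 markers above.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3 ip
      # @58: append --dhchap-ctrlr-key only when a controller key exists
      local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"      # @60
      ip=$(get_main_ns_ip)                                             # 10.0.0.1 in this run
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$ip" -s 4420 -q nqn.2024-02.io.spdk:host0 \
          -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key${keyid}" "${ckey[@]}"                      # @61
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # @64
      rpc_cmd bdev_nvme_detach_controller nvme0                        # @65
  }

The names key0..key4 and ckey0..ckey4 passed to --dhchap-key refer to keyring entries, presumably registered earlier in the test (e.g. with keyring_file_add_key); that setup happens outside this log window.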
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.801 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.073 nvme0n1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.073 20:42:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.073 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.336 nvme0n1 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:21.336 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.597 20:42:13 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.597 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 nvme0n1 00:28:21.859 20:42:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.859 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.859 20:42:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
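The recurring nvmf/common.sh@741-@755 block is plain address selection: it maps the transport to the environment variable holding the initiator-facing address and prints its value, 10.0.0.1 throughout this run. Roughly as follows (the transport variable's name is an assumption; the trace only shows it already expanded to tcp):

  # Address selection as traced at nvmf/common.sh@741-@755.
  get_main_ns_ip() {
      local ip transport=${TEST_TRANSPORT:-tcp}            # variable name assumed
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP                      # @744
          [tcp]=NVMF_INITIATOR_IP                          # @745
      )
      [[ -n $transport && -n ${ip_candidates[$transport]} ]] || return 1  # @747
      ip=${ip_candidates[$transport]}                      # @748: holds a variable *name*
      ip=${!ip}                                            # indirect expansion
      [[ -n $ip ]] && echo "$ip"                           # @750/@755: 10.0.0.1
  }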
00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:21.859 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.121 nvme0n1 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.121 20:42:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.121 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.382 nvme0n1 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.382 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:22.644 20:42:14 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.644 20:42:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.905 nvme0n1 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.905 
20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.905 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.166 20:42:15 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.166 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.425 nvme0n1 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.425 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.687 20:42:15 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.948 nvme0n1 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.948 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.209 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.210 
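One detail worth noticing across these iterations: for keyid=4 the controller key is empty (the @51 test reads [[ -z '' ]]), so the attach at @61 carries only --dhchap-key and exercises unidirectional authentication, while keyids 0-3 also send --dhchap-ctrlr-key and test the bidirectional path. The @58 line is the bash ":+" idiom that makes the flag pair optional; a tiny self-contained demo with hypothetical values:

  # ${var:+words} expands to the flag pair only when ckeys[keyid] is non-empty,
  # so unidirectional passes (empty ckey) silently omit it.
  ckeys=("ctrl-secret-a" "ctrl-secret-b" "")   # hypothetical values
  for keyid in 0 1 2; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid -> ${ckey[*]:-<no controller key>}"
  done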
20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.210 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.470 nvme0n1 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.470 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.730 20:42:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 nvme0n1 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:24.991 20:42:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.932 nvme0n1 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.932 20:42:18 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:25.932 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:25.933 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.874 nvme0n1 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:26.874 20:42:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 nvme0n1 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.447 
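The host/auth.sh@64/@65 markers that recur around this point are the per-iteration verify-and-teardown step, and the @100-102 markers show why it recurs: three nested loops walk every digest, every DH group, and every key slot. A minimal sketch of that check, reconstructed from the trace (rpc_cmd, jq, and the nvme0 controller name come straight from the log; the helper name itself is hypothetical):

# After each authenticated attach, confirm a controller named nvme0 exists,
# then detach it so the next digest/dhgroup/keyid combination starts clean.
verify_and_detach() {
    local name
    name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
    [[ $name == "nvme0" ]]   # matches the [[ nvme0 == \n\v\m\e\0 ]] check in the trace
    rpc_cmd bdev_nvme_detach_controller nvme0
}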
20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
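The nvmf/common.sh@741-755 entries around this point resolve which address the initiator dials: get_main_ns_ip maps the transport to the name of an environment variable and expands it indirectly, yielding 10.0.0.1 in this run. A sketch under that reading (TEST_TRANSPORT as the selector is an assumption; the trace only shows the literal "tcp"):

# Pick the IP for the active transport. The associative array stores variable
# names, not addresses; ${!ip} is bash indirect expansion of the named variable.
get_main_ns_ip() {
    local ip
    local -A ip_candidates
    ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
    ip_candidates["tcp"]=NVMF_INITIATOR_IP
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}
    [[ -z ${!ip} ]] && return 1   # 10.0.0.1 in this run
    echo "${!ip}"
}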
00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:27.447 20:42:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.390 nvme0n1 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:28.390 
20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.390 20:42:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 nvme0n1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 nvme0n1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
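The host/auth.sh@42-51 entries are the target-side half of each iteration: nvmet_auth_set_key programs the digest, DH group, and DHHC-1 secrets for the allowed host before the initiator connects. A minimal sketch of what those echo calls appear to write, assuming the standard kernel nvmet configfs host attributes (the configfs path and $hostnqn are illustrative, not shown in the log):

# Program DH-HMAC-CHAP parameters on the kernel nvmet host entry. keys[] and
# ckeys[] hold the DHHC-1 secrets seen in the trace; a slot with an empty
# ckey (keyid 4 here) gets no bidirectional controller key.
nvmet_auth_set_key() {
    local digest=$1 dhgroup=$2 keyid=$3
    local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
    local host=/sys/kernel/config/nvmet/hosts/$hostnqn   # assumed path

    echo "hmac($digest)" > "$host/dhchap_hash"
    echo "$dhgroup" > "$host/dhchap_dhgroup"
    echo "$key" > "$host/dhchap_key"
    [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
}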
00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.332 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.593 nvme0n1 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.593 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.594 20:42:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.855 nvme0n1 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:29.855 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 nvme0n1 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 nvme0n1 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.117 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.379 nvme0n1 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.379 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
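The host/auth.sh@55-61 entries immediately below this point are the host-side counterpart, connect_authenticate: restrict the SPDK bdev_nvme layer to the one digest/DH-group pair under test, then attach with the matching key slot. A sketch reconstructed from the trace ($TEST_TRANSPORT, $hostnqn, and $subnqn stand in for the literal tcp / nqn.2024-02.io.spdk:host0 / nqn.2024-02.io.spdk:cnode0 values in the log):

# Authenticate one digest/dhgroup/keyid combination from the host side.
# ckey expands to --dhchap-ctrlr-key only when a controller key exists for
# the slot, mirroring the @58 array expansion in the trace.
connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t "$TEST_TRANSPORT" -f ipv4 \
        -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
        --dhchap-key "key${keyid}" "${ckey[@]}"
}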
00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.640 nvme0n1 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.640 20:42:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.902 nvme0n1 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.902 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.163 nvme0n1 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.163 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.424 nvme0n1 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.424 20:42:23 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.424 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.683 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.684 20:42:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.994 nvme0n1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:31.994 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.281 nvme0n1 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.281 20:42:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.281 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.542 nvme0n1 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:32.542 20:42:24 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:32.542 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.543 20:42:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.804 nvme0n1 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:32.804 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:33.064 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.325 nvme0n1 00:28:33.325 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.326 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.895 nvme0n1 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.895 20:42:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.895 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.155 nvme0n1 00:28:34.155 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.155 20:42:26 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.155 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.155 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.155 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.155 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.415 20:42:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.675 nvme0n1 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.675 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:34.936 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.196 nvme0n1 00:28:35.196 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.196 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.196 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.196 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.196 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
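[annotation] The trace above (host/auth.sh@42-51) repeatedly exercises a helper that loads one DH-HMAC-CHAP secret into the kernel nvmet target before each connection attempt: it echoes the HMAC name, the FFDHE group, and the DHHC-1 key/controller-key strings. xtrace does not show redirections, so the configfs destinations below are an assumption based on the standard nvmet host attributes (dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key); the keys[]/ckeys[] arrays are likewise inferred from the log, and this is a sketch rather than verbatim host/auth.sh:

    # Sketch reconstructed from the xtrace; configfs paths and keys[]/ckeys[] are assumptions.
    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"
        local host="/sys/kernel/config/nvmet/hosts/$hostnqn"  # $hostnqn assumed set by the harness

        echo "hmac($digest)" > "$host/dhchap_hash"      # e.g. 'hmac(sha384)' as seen above
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe6144
        echo "$key" > "$host/dhchap_key"
        # A controller (bidirectional) key is optional; keyid 4 has none in this run.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }

The DHHC-1:NN:...: strings are nvme-cli-style secrets; as far as the NVMe DH-HMAC-CHAP secret representation goes, the NN field indicates how the secret was transformed (00 none, 01/02/03 for SHA-256/384/512) and the last field is the base64-encoded secret material.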
00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.457 20:42:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.718 nvme0n1 00:28:35.718 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.718 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.718 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.718 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.718 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
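[annotation] Before every attach, the harness calls get_main_ns_ip (nvmf/common.sh@741-755) to resolve which address the initiator should dial: it maps the transport to the *name* of an environment variable, then dereferences it. The sketch below follows the traced control flow statement by statement; the $TEST_TRANSPORT guard variable is an assumption (the trace only shows its value, "tcp"):

    # Sketch matching the traced flow; only NVMF_FIRST_TARGET_IP/NVMF_INITIATOR_IP
    # are taken verbatim from the log.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates=(
            ["rdma"]=NVMF_FIRST_TARGET_IP
            ["tcp"]=NVMF_INITIATOR_IP
        )

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # tcp -> the name "NVMF_INITIATOR_IP"
        ip=${!ip}                              # indirect expansion -> 10.0.0.1 in this run
        [[ -z $ip ]] && return 1
        echo "$ip"
    }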
00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:35.978 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.549 nvme0n1 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.549 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:36.810 20:42:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.381 nvme0n1 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.381 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.641 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:37.642 20:42:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.211 nvme0n1 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.212 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:38.472 20:42:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.040 nvme0n1 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.040 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.299 20:42:31 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.299 20:42:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.300 20:42:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.300 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.300 20:42:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.867 nvme0n1 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.867 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:39.868 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.127 nvme0n1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.127 20:42:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.127 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 nvme0n1 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.387 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.388 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.648 nvme0n1 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.648 20:42:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.648 20:42:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.648 20:42:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 nvme0n1 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 nvme0n1 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.909 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.168 nvme0n1 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.168 
20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.168 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.169 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.169 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.428 20:42:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.428 nvme0n1 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.428 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
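The host/auth.sh@42-@51 fragments traced above belong to the test's nvmet_auth_set_key helper, which installs the digest, DH group, and DH-HMAC-CHAP secrets for one keyid on the kernel nvmet target before each connection attempt. A minimal reconstruction is sketched below; the configfs attribute paths and the $hostnqn variable are assumptions for illustration and may differ from the actual helper in SPDK's test/nvmf/host/auth.sh.

    nvmet_auth_set_key() {
        local digest dhgroup keyid key ckey
        digest="$1" dhgroup="$2" keyid="$3"
        key="${keys[keyid]}" ckey="${ckeys[keyid]}"
        # Target-side auth settings live under the allowed host's configfs node
        # (paths assumed; the xtrace above only shows the echoed values)
        echo "hmac($digest)" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_hash"
        echo "$dhgroup" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_dhgroup"
        echo "$key" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_key"
        # The controller (bidirectional) key is optional; keyid 4 has none in this run
        [[ -z $ckey ]] || echo "$ckey" > "/sys/kernel/config/nvmet/hosts/$hostnqn/dhchap_ctrl_key"
    }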
00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.688 20:42:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.688 nvme0n1 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.688 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.947 20:42:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
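Each @55-@65 cycle in this trace is one call to the connect_authenticate helper: the initiator is pinned to a single digest/dhgroup pair, attached with the keyid under test, the controller's presence is verified, and the connection is torn down. A hedged reconstruction, with $hostnqn and $subnqn standing in for the literal NQNs in the log (get_main_ns_ip is sketched in a later note); the real helper may differ in error handling:

    connect_authenticate() {
        local digest dhgroup keyid ckey
        digest="$1" dhgroup="$2" keyid="$3"
        # Add --dhchap-ctrlr-key only when this keyid has a controller key
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
            -a "$(get_main_ns_ip)" -s 4420 -q "$hostnqn" -n "$subnqn" \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # nvme0 only shows up in get_controllers if DH-HMAC-CHAP succeeded
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }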
00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.947 nvme0n1 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.947 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.207 
20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.207 nvme0n1 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.207 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.468 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.729 nvme0n1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.729 20:42:34 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.729 20:42:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.990 nvme0n1 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
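The DHHC-1 secrets echoed throughout this run follow the NVMe in-band authentication key format (TP 8006): DHHC-1:<t>:<base64>:, where the base64 payload is the secret plus a 4-byte CRC-32, t=00 marks an untransformed secret, and 01/02/03 mark secrets transformed with SHA-256/384/512 (32/48/64 bytes respectively). A quick length check on key 0 from this section, runnable in any shell with base64(1):

    key='DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW:'
    b64=${key#DHHC-1:00:}; b64=${b64%:}
    # 32 secret bytes + 4 CRC-32 bytes = 36 decoded bytes
    echo "$b64" | base64 -d | wc -c    # prints 36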
00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:42.990 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:42.991 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.991 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:42.991 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.252 nvme0n1 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.252 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.512 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.772 nvme0n1 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.772 20:42:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.772 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.031 nvme0n1 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
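The nvmf/common.sh@741-@755 fragments above are get_main_ns_ip picking the address to dial for the active transport: it maps "rdma" to NVMF_FIRST_TARGET_IP and "tcp" to NVMF_INITIATOR_IP, then dereferences the chosen variable (10.0.0.1 in this run). A hedged reconstruction; the TEST_TRANSPORT guard and the return codes are assumptions, the candidate table matches the trace:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP   # variable *name*, not value
        ip_candidates["tcp"]=NVMF_INITIATOR_IP
        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}
        [[ -z ${!ip} ]] && return 1    # indirect expansion; ${!ip} is 10.0.0.1 here
        echo "${!ip}"
    }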
00:28:44.031 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.032 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 nvme0n1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
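The @101-@104 markers frame the driver loop for this phase of the test: for every DH group, each of the five keyids is installed on the target and then exercised from the initiator. A hedged sketch using the values observed in this section (digest fixed at sha512; keys[]/ckeys[] hold the DHHC-1 secrets listed above and are populated earlier in host/auth.sh):

    digest=sha512
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144)   # groups seen in this section
    for dhgroup in "${dhgroups[@]}"; do                  # host/auth.sh@101
        for keyid in "${!keys[@]}"; do                   # host/auth.sh@102, keyids 0-4
            nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # @103: target side
            connect_authenticate "$digest" "$dhgroup" "$keyid"  # @104: initiator side
        done
    done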
00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:44.601 20:42:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.171 nvme0n1 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.171 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.741 nvme0n1 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:45.741 20:42:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.312 nvme0n1 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.312 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.573 nvme0n1 00:28:46.573 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.573 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.573 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.573 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.573 20:42:38 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.573 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:46.833 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwZmJmMDM3YjJkZmQ5Yzc2MmJhMTYzMTA3MTBmNmUrehdW: 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: ]] 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjBhYzY2YWVkNmU0ZWY1MjFhZmE4MTM5MTc5Mjg4MjYzYjExZmEwN2EzYmVjZmI2ZWZlYWNhOWIyZjdhZWNjZvyn9B4=: 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.834 20:42:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:46.834 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.404 nvme0n1 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.404 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:47.664 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:47.665 20:42:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.235 nvme0n1 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.235 20:42:40 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.235 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmVhMzUwZjc1NGUyN2RhYzQwZTEwZDk2MDlmOWU1YzQ4RM/2: 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmIwNWU3NDNhNWJmNDM3MzUwMDFiNzliNjg3YTAyNDPdkmgZ: 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:48.495 20:42:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.066 nvme0n1 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZGQ4YWI1YTY3OGRmNjEwNzg3ZDc5ZTQ1ZmQyZDQ4NDczZjdiNzM1NjVlMTA4NmJjbwhDQA==: 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: ]] 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTg4OTRiZGNmODE3YmYzN2E2MDU3MTNlYjEyZWViNGJoUfy/: 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:49.066 20:42:41 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.066 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.327 20:42:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 nvme0n1 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YmFiNWRjYzUwNjIzN2I1Yjg1YjQ5ZDE2NzVjOWM1YTkwYmViYmE4YTZlZjYzZTI2NWM1MDg5ZjBkZGUwYjY4MAb0t/A=: 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:49.898 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 nvme0n1 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 20:42:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NzM2ZDAyMmQwZDljMjExOWM5NjdmYzc2YTI4YzFlODY2NzM4YTRhYWI0MWU3YWRkdtuShQ==: 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NjgyNjg2ODcyZDI1N2QyMTYzODUzYmQwNTIzOTMyZjhmMmM1MjFlOWU5MGVkYjcywgyLPg==: 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.839 
20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 request: 00:28:50.839 { 00:28:50.839 "name": "nvme0", 00:28:50.839 "trtype": "tcp", 00:28:50.839 "traddr": "10.0.0.1", 00:28:50.839 "adrfam": "ipv4", 00:28:50.839 "trsvcid": "4420", 00:28:50.839 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:50.839 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:50.839 "prchk_reftag": false, 00:28:50.839 "prchk_guard": false, 00:28:50.839 "hdgst": false, 00:28:50.839 "ddgst": false, 00:28:50.839 "method": "bdev_nvme_attach_controller", 00:28:50.839 "req_id": 1 00:28:50.839 } 00:28:50.839 Got JSON-RPC error response 00:28:50.839 response: 00:28:50.839 { 00:28:50.839 "code": -5, 00:28:50.839 "message": "Input/output error" 00:28:50.839 } 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:50.839 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.840 request: 00:28:50.840 { 00:28:50.840 "name": "nvme0", 00:28:50.840 "trtype": "tcp", 00:28:50.840 "traddr": "10.0.0.1", 00:28:50.840 "adrfam": "ipv4", 00:28:50.840 "trsvcid": "4420", 00:28:50.840 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:50.840 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:50.840 "prchk_reftag": false, 00:28:50.840 "prchk_guard": false, 00:28:50.840 "hdgst": false, 00:28:50.840 "ddgst": false, 00:28:50.840 "dhchap_key": "key2", 00:28:50.840 "method": "bdev_nvme_attach_controller", 00:28:50.840 "req_id": 1 00:28:50.840 } 00:28:50.840 Got JSON-RPC error response 00:28:50.840 response: 00:28:50.840 { 00:28:50.840 "code": -5, 00:28:50.840 "message": "Input/output error" 00:28:50.840 } 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:50.840 20:42:43 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:50.840 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.100 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:51.101 request: 00:28:51.101 { 00:28:51.101 "name": "nvme0", 00:28:51.101 "trtype": "tcp", 00:28:51.101 "traddr": "10.0.0.1", 00:28:51.101 "adrfam": "ipv4", 
00:28:51.101 "trsvcid": "4420", 00:28:51.101 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:51.101 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:51.101 "prchk_reftag": false, 00:28:51.101 "prchk_guard": false, 00:28:51.101 "hdgst": false, 00:28:51.101 "ddgst": false, 00:28:51.101 "dhchap_key": "key1", 00:28:51.101 "dhchap_ctrlr_key": "ckey2", 00:28:51.101 "method": "bdev_nvme_attach_controller", 00:28:51.101 "req_id": 1 00:28:51.101 } 00:28:51.101 Got JSON-RPC error response 00:28:51.101 response: 00:28:51.101 { 00:28:51.101 "code": -5, 00:28:51.101 "message": "Input/output error" 00:28:51.101 } 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:51.101 rmmod nvme_tcp 00:28:51.101 rmmod nvme_fabrics 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1501915 ']' 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1501915 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1501915 ']' 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1501915 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1501915 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1501915' 00:28:51.101 killing process with pid 1501915 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1501915 00:28:51.101 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1501915 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:51.360 20:42:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:53.272 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:53.273 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:53.273 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:28:53.273 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:28:53.534 20:42:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:57.742 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:28:57.742 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:28:57.742 20:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.nvk /tmp/spdk.key-null.fH6 /tmp/spdk.key-sha256.7Gx /tmp/spdk.key-sha384.OGk /tmp/spdk.key-sha512.lc7 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:57.742 20:42:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:01.044 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:29:01.044 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:29:01.044 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:29:01.303 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:29:01.303 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:29:01.303 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:29:01.303 00:29:01.303 real 0m58.857s 00:29:01.303 user 0m51.919s 00:29:01.303 sys 0m16.105s 00:29:01.303 20:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:01.303 20:42:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:01.303 ************************************ 00:29:01.303 END TEST nvmf_auth_host 00:29:01.303 ************************************ 00:29:01.303 20:42:53 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:01.303 20:42:53 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:29:01.303 20:42:53 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:01.303 20:42:53 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:01.303 20:42:53 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:01.303 20:42:53 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:01.303 ************************************ 00:29:01.303 START TEST nvmf_digest 00:29:01.303 ************************************ 00:29:01.303 20:42:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:01.303 * Looking for test storage... 
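The auth-host teardown traced above unwinds the kernel nvmet configfs tree strictly bottom-up: symlinks before directories, directories before the module. A minimal standalone sketch of that order follows; the NQNs and paths are the ones in the trace, but the redirect target of the traced 'echo 0' is hidden by xtrace, so the namespace enable attribute is an assumption.

cfg=/sys/kernel/config/nvmet
subsys=$cfg/subsystems/nqn.2024-02.io.spdk:cnode0
rm "$subsys/allowed_hosts/nqn.2024-02.io.spdk:host0"        # drop the host ACL symlink first
rmdir "$cfg/hosts/nqn.2024-02.io.spdk:host0"
echo 0 > "$subsys/namespaces/1/enable"                      # assumed target of the traced 'echo 0'
rm -f "$cfg/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0"  # unlink port -> subsystem
rmdir "$subsys/namespaces/1"
rmdir "$cfg/ports/1"
rmdir "$subsys"
modprobe -r nvmet_tcp nvmet                                 # only removable once configfs is empty

Each rmdir returns EBUSY while anything below it is still linked, which is why the teardown has to run in exactly this order.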
00:29:01.303 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:01.303 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:01.595 20:42:53 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:29:01.595 20:42:53 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:09.775 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:09.775 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:09.775 Found net devices under 0000:31:00.0: cvl_0_0 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:09.775 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:09.776 Found net devices under 0000:31:00.1: cvl_0_1 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:09.776 20:43:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:09.776 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:09.776 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.643 ms 00:29:09.776 00:29:09.776 --- 10.0.0.2 ping statistics --- 00:29:09.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.776 rtt min/avg/max/mdev = 0.643/0.643/0.643/0.000 ms 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:09.776 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:09.776 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:29:09.776 00:29:09.776 --- 10.0.0.1 ping statistics --- 00:29:09.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:09.776 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:09.776 ************************************ 00:29:09.776 START TEST nvmf_digest_clean 00:29:09.776 ************************************ 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1519854 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1519854 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1519854 ']' 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:09.776 
20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:09.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:09.776 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.037 [2024-07-15 20:43:02.193451] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:29:10.037 [2024-07-15 20:43:02.193515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:10.037 EAL: No free 2048 kB hugepages reported on node 1 00:29:10.037 [2024-07-15 20:43:02.274423] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:10.037 [2024-07-15 20:43:02.347283] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:10.037 [2024-07-15 20:43:02.347323] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:10.037 [2024-07-15 20:43:02.347332] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:10.037 [2024-07-15 20:43:02.347339] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:10.037 [2024-07-15 20:43:02.347346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
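The nvmf_tcp_init block traced at 20:43:01 splits the two e810 ports across network namespaces so target and initiator talk over a real wire: cvl_0_0 moves into cvl_0_0_ns_spdk as the target NIC, while cvl_0_1 stays in the root namespace as the initiator NIC. Condensed from the trace (interface names and 10.0.0.0/24 addressing are exactly the traced ones):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                     # target NIC into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                           # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port in the filter table
ping -c 1 10.0.0.2                                            # both directions are ping-checked

The two sub-millisecond pings recorded above (0.643 ms and 0.316 ms) confirm the path before any NVMe traffic is attempted.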
00:29:10.037 [2024-07-15 20:43:02.347365] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.608 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:10.608 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:10.608 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:10.608 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:10.608 20:43:02 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 null0 00:29:10.868 [2024-07-15 20:43:03.094159] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:10.868 [2024-07-15 20:43:03.118395] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1520023 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1520023 /var/tmp/bperf.sock 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1520023 ']' 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:29:10.868 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:10.868 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:10.868 [2024-07-15 20:43:03.172342] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:29:10.868 [2024-07-15 20:43:03.172391] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520023 ] 00:29:10.868 EAL: No free 2048 kB hugepages reported on node 1 00:29:11.128 [2024-07-15 20:43:03.256324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.128 [2024-07-15 20:43:03.320538] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:11.696 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:11.696 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:11.696 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:11.696 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:11.696 20:43:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:11.956 20:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:11.956 20:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:12.216 nvme0n1 00:29:12.216 20:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:12.216 20:43:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:12.216 Running I/O for 2 seconds... 
00:29:14.126 00:29:14.126 Latency(us) 00:29:14.126 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.126 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:14.126 nvme0n1 : 2.00 20665.20 80.72 0.00 0.00 6186.07 3126.61 20425.39 00:29:14.126 =================================================================================================================== 00:29:14.126 Total : 20665.20 80.72 0.00 0.00 6186.07 3126.61 20425.39 00:29:14.126 0 00:29:14.126 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:14.126 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:14.126 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:14.126 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:14.126 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:14.126 | select(.opcode=="crc32c") 00:29:14.126 | "\(.module_name) \(.executed)"' 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1520023 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1520023 ']' 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1520023 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520023 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520023' 00:29:14.385 killing process with pid 1520023 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1520023 00:29:14.385 Received shutdown signal, test time was about 2.000000 seconds 00:29:14.385 00:29:14.385 Latency(us) 00:29:14.385 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:14.385 =================================================================================================================== 00:29:14.385 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:14.385 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1520023 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:14.646 20:43:06 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1520703 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1520703 /var/tmp/bperf.sock 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1520703 ']' 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:14.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:14.646 20:43:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:14.646 [2024-07-15 20:43:06.868245] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:29:14.646 [2024-07-15 20:43:06.868303] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1520703 ] 00:29:14.646 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:14.646 Zero copy mechanism will not be used. 
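Each cycle above ends the same way: before killing bdevperf, the test pulls accel-framework statistics over the bperf RPC socket and asserts that the crc32c opcode actually executed, and in the expected module (software here, since DSA scanning is off). The RPC call and the jq filter are verbatim from the trace; the read plumbing is a sketch:

read -r acc_module acc_executed < <(
  scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
    | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
)
(( acc_executed > 0 )) && [[ $acc_module == software ]] && echo "digests computed in software: $acc_executed"

A nonzero executed count is what distinguishes a digest-enabled run from a silently digest-less one; the IOPS numbers alone would look plausible either way.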
00:29:14.646 EAL: No free 2048 kB hugepages reported on node 1 00:29:14.646 [2024-07-15 20:43:06.948887] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:14.646 [2024-07-15 20:43:07.002644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.608 20:43:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:15.866 nvme0n1 00:29:15.866 20:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:15.867 20:43:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:15.867 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:15.867 Zero copy mechanism will not be used. 00:29:15.867 Running I/O for 2 seconds... 
00:29:17.776 00:29:17.776 Latency(us) 00:29:17.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:17.776 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:17.776 nvme0n1 : 2.00 3025.80 378.23 0.00 0.00 5285.36 1208.32 14199.47 00:29:17.776 =================================================================================================================== 00:29:17.776 Total : 3025.80 378.23 0.00 0.00 5285.36 1208.32 14199.47 00:29:17.776 0 00:29:17.776 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:17.776 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:17.776 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:17.776 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:17.776 | select(.opcode=="crc32c") 00:29:17.776 | "\(.module_name) \(.executed)"' 00:29:17.776 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1520703 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1520703 ']' 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1520703 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1520703 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1520703' 00:29:18.037 killing process with pid 1520703 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1520703 00:29:18.037 Received shutdown signal, test time was about 2.000000 seconds 00:29:18.037 00:29:18.037 Latency(us) 00:29:18.037 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:18.037 =================================================================================================================== 00:29:18.037 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:18.037 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1520703 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:18.298 20:43:10 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1521392 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1521392 /var/tmp/bperf.sock 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1521392 ']' 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:18.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:18.298 20:43:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:18.298 [2024-07-15 20:43:10.522718] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
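Every cycle drives bdevperf through the same four-step RPC handshake: start it paused with --wait-for-rpc, release the framework, attach the remote controller with data digest enabled, then run the timed workload. The commands below are the ones first traced at 20:43:03 (randread, 4096-byte I/O, queue depth 128), shown with paths relative to the spdk checkout; only the workload flags change between cycles:

build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
  -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
  -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

--ddgst is the point of the whole test: it enables the NVMe/TCP data digest (CRC32C) on the attached controller, which is what later shows up as executed crc32c operations in accel_get_stats.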
00:29:18.298 [2024-07-15 20:43:10.522772] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1521392 ] 00:29:18.298 EAL: No free 2048 kB hugepages reported on node 1 00:29:18.298 [2024-07-15 20:43:10.602626] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:18.298 [2024-07-15 20:43:10.655574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.238 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:19.808 nvme0n1 00:29:19.808 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:19.808 20:43:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:19.808 Running I/O for 2 seconds... 
00:29:21.716 00:29:21.716 Latency(us) 00:29:21.716 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.716 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:21.716 nvme0n1 : 2.00 22010.31 85.98 0.00 0.00 5807.59 3631.79 15400.96 00:29:21.716 =================================================================================================================== 00:29:21.716 Total : 22010.31 85.98 0.00 0.00 5807.59 3631.79 15400.96 00:29:21.716 0 00:29:21.716 20:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:21.716 20:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:21.716 20:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:21.716 20:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:21.716 | select(.opcode=="crc32c") 00:29:21.716 | "\(.module_name) \(.executed)"' 00:29:21.716 20:43:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1521392 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1521392 ']' 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1521392 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1521392 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1521392' 00:29:21.976 killing process with pid 1521392 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1521392 00:29:21.976 Received shutdown signal, test time was about 2.000000 seconds 00:29:21.976 00:29:21.976 Latency(us) 00:29:21.976 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:21.976 =================================================================================================================== 00:29:21.976 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1521392 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:21.976 20:43:14 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1522218 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1522218 /var/tmp/bperf.sock 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1522218 ']' 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:21.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:21.976 20:43:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:22.236 [2024-07-15 20:43:14.370030] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:29:22.236 [2024-07-15 20:43:14.370087] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1522218 ] 00:29:22.236 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:22.236 Zero copy mechanism will not be used. 
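The cycle starting above is the last of four: the clean test sweeps a small matrix of workload, block size, and queue depth, reusing the run_bperf helper each time. As orchestration it reduces to the sketch below; run_bperf and its arguments are the traced ones, the loop itself is editorial:

for spec in "randread 4096 128" "randread 131072 16" "randwrite 4096 128" "randwrite 131072 16"; do
  read -r rw bs qd <<< "$spec"
  run_bperf "$rw" "$bs" "$qd" false   # final arg: scan_dsa off, so crc32c stays in software
done

The 131072-byte runs are the ones that print the zero-copy notice: 128 KiB exceeds the 65536-byte zero-copy threshold, so those cycles also exercise the copying data path.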
00:29:22.236 EAL: No free 2048 kB hugepages reported on node 1 00:29:22.236 [2024-07-15 20:43:14.452005] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:22.236 [2024-07-15 20:43:14.505192] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:22.806 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:22.806 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:29:22.806 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:22.806 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:22.806 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:23.067 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.067 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:23.639 nvme0n1 00:29:23.639 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:23.639 20:43:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:23.639 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:23.639 Zero copy mechanism will not be used. 00:29:23.639 Running I/O for 2 seconds... 
00:29:25.551 00:29:25.551 Latency(us) 00:29:25.551 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.551 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:25.551 nvme0n1 : 2.00 3488.17 436.02 0.00 0.00 4579.51 1993.39 13216.43 00:29:25.551 =================================================================================================================== 00:29:25.551 Total : 3488.17 436.02 0.00 0.00 4579.51 1993.39 13216.43 00:29:25.551 0 00:29:25.551 20:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:25.551 20:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:25.551 20:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:25.551 20:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:25.552 20:43:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:25.552 | select(.opcode=="crc32c") 00:29:25.552 | "\(.module_name) \(.executed)"' 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1522218 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1522218 ']' 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1522218 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1522218 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1522218' 00:29:25.813 killing process with pid 1522218 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1522218 00:29:25.813 Received shutdown signal, test time was about 2.000000 seconds 00:29:25.813 00:29:25.813 Latency(us) 00:29:25.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:25.813 =================================================================================================================== 00:29:25.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1522218 00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1519854 00:29:26.074 20:43:18 
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1522218
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1522218 ']'
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1522218
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1522218
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1522218'
00:29:25.813 killing process with pid 1522218
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1522218
00:29:25.813 Received shutdown signal, test time was about 2.000000 seconds
00:29:25.813
00:29:25.813 Latency(us)
00:29:25.813 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:25.813 ===================================================================================================================
00:29:25.813 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:25.813 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1522218
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1519854
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1519854 ']'
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1519854
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1519854
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1519854'
00:29:26.074 killing process with pid 1519854
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1519854
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1519854
00:29:26.074
00:29:26.074 real 0m16.268s
00:29:26.074 user 0m31.967s
00:29:26.074 sys 0m3.256s
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:29:26.074 ************************************
00:29:26.074 END TEST nvmf_digest_clean
00:29:26.074 ************************************
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:29:26.074 20:43:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:29:26.333 ************************************
00:29:26.333 START TEST nvmf_digest_error
00:29:26.333 ************************************
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1523102
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1523102
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1523102 ']'
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:26.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:26.333 20:43:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:26.333 [2024-07-15 20:43:18.543909] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:29:26.333 [2024-07-15 20:43:18.543964] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:29:26.333 EAL: No free 2048 kB hugepages reported on node 1
00:29:26.333 [2024-07-15 20:43:18.619937] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:26.333 [2024-07-15 20:43:18.688835] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:29:26.333 [2024-07-15 20:43:18.688873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:29:26.333 [2024-07-15 20:43:18.688880] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:29:26.333 [2024-07-15 20:43:18.688887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:29:26.333 [2024-07-15 20:43:18.688892] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:29:26.333 [2024-07-15 20:43:18.688910] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
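nvmfappstart launched the target with --wait-for-rpc, which is what makes the crc32c reassignment below possible: the app pauses before framework initialization until framework_start_init arrives over RPC, leaving a window to reconfigure the accel layer. A sketch of the launch sequence from the nvmf/common.sh@480-482 expansions above; waitforlisten is assumed, as its name and the 'Waiting for process...' echo suggest, to poll until the RPC socket accepts connections:

# Start the target inside the test's network namespace, blocked before init.
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc &
nvmfpid=$!
waitforlisten "$nvmfpid"   # blocks until /var/tmp/spdk.sock answers RPCs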
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.273 [2024-07-15 20:43:19.338782] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.273 null0
00:29:27.273 [2024-07-15 20:43:19.419593] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:29:27.273 [2024-07-15 20:43:19.443779] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1523219
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1523219 /var/tmp/bperf.sock
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1523219 ']'
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
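The bdevperf invocation at host/digest.sh@57 maps run_bperf_err's 'randread 4096 128' arguments onto command-line flags. The annotated relaunch below is a sketch; the flag readings follow standard bdevperf usage, with -z being the detail that matters here since it keeps the app idle until perform_tests arrives over RPC:

# -m 2                       : core mask 0x2, keep bdevperf on core 1 (the target reactor owns core 0)
# -r /var/tmp/bperf.sock     : private RPC socket that bperf_rpc/bperf_py talk to
# -w randread -o 4096 -q 128 : workload, I/O size and queue depth from run_bperf_err's arguments
# -t 2                       : each perform_tests pass runs I/O for 2 seconds
# -z                         : start idle; wait to be configured and kicked off over RPC
"$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z &
bperfpid=$!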
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:27.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:27.273 20:43:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:27.273 [2024-07-15 20:43:19.499323] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:29:27.273 [2024-07-15 20:43:19.499371] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523219 ]
00:29:27.273 EAL: No free 2048 kB hugepages reported on node 1
00:29:27.273 [2024-07-15 20:43:19.577741] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:27.273 [2024-07-15 20:43:19.631664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:28.212 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:28.473 nvme0n1
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:28.473 20:43:20 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
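With the controller attached cleanly under '-t disable', the test arms the fault right before the measured run. The sequence below condenses the RPCs traced above; the interpretation (the target's error accel module corrupts crc32c results used for TCP data digests, so the host's receive path flags the READ completions that follow) is a reading of the trace rather than something the log states outright:

rpc_cmd accel_assign_opc -o crc32c -m error                    # at target startup: route crc32c to the error module
rpc_cmd accel_error_inject_error -o crc32c -t disable          # pass-through while nvme0 attaches
rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256   # corrupt crc32c results (-i 256 presumably bounding the affected operations)
bperf_py perform_tests                                         # the run below then logs a digest error per READ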
00:29:28.473 Running I/O for 2 seconds...
00:29:28.473 [2024-07-15 20:43:20.790017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:28.473 [2024-07-15 20:43:20.790048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.473 [2024-07-15 20:43:20.790058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:28.473 [2024-07-15 20:43:20.803128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:28.473 [2024-07-15 20:43:20.803148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:17805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.473 [2024-07-15 20:43:20.803155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:28.473 [2024-07-15 20:43:20.816041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:28.473 [2024-07-15 20:43:20.816060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:19348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:28.473 [2024-07-15 20:43:20.816067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... some 90 similar completions trimmed: the remainder of the 2-second run repeats the same three-line pattern (data digest error on tqpair=(0x1969910), READ command print, TRANSIENT TRANSPORT ERROR (00/22) completion) with varying cid and lba values through 20:43:22 ...]
00:29:29.785 [2024-07-15 20:43:22.045663] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:29.785 [2024-07-15 20:43:22.045680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:29.785 [2024-07-15 20:43:22.045686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0
dnr:0 00:29:29.785 [2024-07-15 20:43:22.056682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.056700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.056706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.069128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.069146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:2312 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.069153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.081839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.081856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.081863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.093988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.094005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.094012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.106318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.106336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.106342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.117395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.117413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1691 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.117419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.129740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.129757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.129764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.143382] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.143399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:25099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.143406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:29.785 [2024-07-15 20:43:22.155943] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:29.785 [2024-07-15 20:43:22.155960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:12357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:29.785 [2024-07-15 20:43:22.155969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.058 [2024-07-15 20:43:22.167196] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.058 [2024-07-15 20:43:22.167214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.058 [2024-07-15 20:43:22.167220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.058 [2024-07-15 20:43:22.178636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.058 [2024-07-15 20:43:22.178654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:23684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.058 [2024-07-15 20:43:22.178661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.058 [2024-07-15 20:43:22.193017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.058 [2024-07-15 20:43:22.193035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:14618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.058 [2024-07-15 20:43:22.193041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.058 [2024-07-15 20:43:22.205898] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.058 [2024-07-15 20:43:22.205916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:16217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.058 [2024-07-15 20:43:22.205922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.058 [2024-07-15 20:43:22.217769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.058 [2024-07-15 20:43:22.217786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:20606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.217793] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.231017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.231035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.231041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.242923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.242940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8357 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.242947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.253452] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.253470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23094 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.253476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.266646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.266666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.266673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.278504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.278522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:18557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.278529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.290909] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.290927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.290934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.302517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.302534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 
[2024-07-15 20:43:22.302541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.314707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.314725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:4891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.314731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.328260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.328278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:13625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.328284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.340211] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.340228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:11723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.340238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.352626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.352644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:21597 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.352650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.364198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.364215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.364222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.378447] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.378464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14434 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.378470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.388959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.388976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7213 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.388983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.403817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.403835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:23140 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.403842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.415743] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.415761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5843 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.415768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.059 [2024-07-15 20:43:22.426588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.059 [2024-07-15 20:43:22.426605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.059 [2024-07-15 20:43:22.426611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.439235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.439253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10392 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.439260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.452340] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.452358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:9652 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.452364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.464516] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.464533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:23654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.464540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.475153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.475170] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.475181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.487979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.487996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:5353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.488003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.501506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.501524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:6371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.501530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.513633] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.513650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:7935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.513657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.526010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.526027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.526034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.537247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.537264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:3882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.537271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.549605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.549622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.549629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.563488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.563506] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:16895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.563513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.320 [2024-07-15 20:43:22.576534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.320 [2024-07-15 20:43:22.576551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3552 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.320 [2024-07-15 20:43:22.576558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.589636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.589654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:5014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.589660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.600807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.600825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.600831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.613388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.613406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:6651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.613412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.625892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.625910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:13211 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.625916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.637588] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.637606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.637613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.649620] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.649638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:4326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.649644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.661847] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.661865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.661871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.674568] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.674585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:1009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.674592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.685303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.685320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:21201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.685330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.321 [2024-07-15 20:43:22.699034] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.321 [2024-07-15 20:43:22.699052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.321 [2024-07-15 20:43:22.699058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.582 [2024-07-15 20:43:22.712327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.582 [2024-07-15 20:43:22.712346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:10532 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.582 [2024-07-15 20:43:22.712353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.582 [2024-07-15 20:43:22.723841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910) 00:29:30.582 [2024-07-15 20:43:22.723859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:14163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.582 [2024-07-15 20:43:22.723865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:30.582 [2024-07-15 20:43:22.736055] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:29.523 [2024-07-15 20:43:22.736074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:7943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.582 [2024-07-15 20:43:22.736081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.582 [2024-07-15 20:43:22.747967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:30.582 [2024-07-15 20:43:22.747985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:12216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.582 [2024-07-15 20:43:22.747991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.582 [2024-07-15 20:43:22.760277] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:30.582 [2024-07-15 20:43:22.760296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:20839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.582 [2024-07-15 20:43:22.760302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.582 [2024-07-15 20:43:22.771764] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1969910)
00:29:30.582 [2024-07-15 20:43:22.771782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:20191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:30.582 [2024-07-15 20:43:22.771789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:30.582
00:29:30.582 Latency(us)
00:29:30.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:30.582 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:30.582 nvme0n1 : 2.00 20738.51 81.01 0.00 0.00 6166.15 2048.00 17803.95
00:29:30.582 ===================================================================================================================
00:29:30.582 Total : 20738.51 81.01 0.00 0.00 6166.15 2048.00 17803.95
00:29:30.582 0
00:29:30.582 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:30.582 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:30.582 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:30.582 | .driver_specific
00:29:30.582 | .nvme_error
00:29:30.582 | .status_code
00:29:30.582 | .command_transient_transport_error'
00:29:30.582 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 ))
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1523219
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1523219 ']'
00:29:30.843
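For reference, the get_transient_errcount check traced above reduces to a single iostat query over the bdevperf RPC socket plus a jq projection; a minimal sketch, with the rpc.py path, socket, and filter taken verbatim from the trace (the 162 asserted via (( 162 > 0 )) is that count, which host/digest.sh requires to be non-zero for the run to pass):

  # Sketch: count of COMMAND TRANSIENT TRANSPORT ERROR completions recorded by bdevperf for nvme0n1
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'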
20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1523219
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:30.843 20:43:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523219
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523219'
00:29:30.843 killing process with pid 1523219
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1523219
00:29:30.843 Received shutdown signal, test time was about 2.000000 seconds
00:29:30.843
00:29:30.843 Latency(us)
00:29:30.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:30.843 ===================================================================================================================
00:29:30.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1523219
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1523979
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1523979 /var/tmp/bperf.sock
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1523979 ']'
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:30.843 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:30.843 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:30.843 [2024-07-15 20:43:23.182797] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
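Condensed, the teardown-and-respawn sequence traced above checks that the previous bdevperf (pid 1523219) is alive and is not sudo, kills and reaps it, then launches a fresh bdevperf serving RPC on the same UNIX socket for the 128 KiB, qd=16 error run. A hedged sketch of that flow, with pids, paths, and flags copied from the trace rather than from the harness source:

  # Sketch of killprocess + relaunch (works only where the old pid is a child of this shell, as in the harness)
  kill -0 1523219 && [ "$(ps --no-headers -o comm= 1523219)" != sudo ] && kill 1523219
  wait 1523219
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z &
  bperfpid=$!   # the trace records 1523979; waitforlisten then polls /var/tmp/bperf.sock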
00:29:30.843 [2024-07-15 20:43:23.182855] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1523979 ]
00:29:30.843 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:30.843 Zero copy mechanism will not be used.
00:29:30.843 EAL: No free 2048 kB hugepages reported on node 1
00:29:31.104 [2024-07-15 20:43:23.262434] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:31.104 [2024-07-15 20:43:23.315844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:31.674 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:31.674 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:31.674 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:31.674 20:43:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:31.934 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:32.194 nvme0n1
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:32.194 20:43:24 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:32.194 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:32.194 Zero copy mechanism will not be used.
00:29:32.194 Running I/O for 2 seconds...
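Before perform_tests is kicked off, the trace above configures the run end to end: error accounting and unlimited retries on the bdevperf side, a clean crc32c injection state, a controller attached with TCP data digest enabled, and then corruption of every 32nd crc32c operation so digests fail in flight. A condensed sketch of that RPC sequence (command names and flags verbatim from the trace; rpc_cmd is assumed to reach the nvmf target app's default RPC socket, as it does in this harness):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # bdevperf side: keep NVMe error counters and retry transient errors forever
  $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # target side: clear any previous crc32c error injection
  $rpc accel_error_inject_error -o crc32c -t disable
  # attach the controller with TCP data digest (--ddgst) enabled, exposing nvme0n1
  $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt every 32nd crc32c so the initiator sees data digest errors
  $rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # drive the 2-second randread workload over the bperf socket
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests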
00:29:32.194 [2024-07-15 20:43:24.501466] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.501498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.501506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.511589] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.511611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.511618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.520853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.520873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.520880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.531015] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.531034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.531041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.541859] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.541878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.541885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.553252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.553271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.553278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.194 [2024-07-15 20:43:24.564096] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.194 [2024-07-15 20:43:24.564115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.194 [2024-07-15 20:43:24.564121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.574351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.574369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.574376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.586463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.586483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.586489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.595958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.595977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.595983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.605817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.605836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.605846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.616950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.616968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.616974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.628449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.628468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.628474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:32.455 [2024-07-15 20:43:24.639853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:32.455 [2024-07-15 20:43:24.639871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:32.455 [2024-07-15 20:43:24.639877] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:32.455 [2024-07-15 20:43:24.650696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:32.455 [2024-07-15 20:43:24.650714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.455 [2024-07-15 20:43:24.650721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:29:32.455 [2024-07-15 20:43:24.660523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:32.455 [2024-07-15 20:43:24.660542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.455 [2024-07-15 20:43:24.660549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same ERROR/READ/completion triplet repeats ~45 more times for qid:1 cid:15, lba and sqhd varying per I/O, 2024-07-15 20:43:24.670 through 20:43:25.151; build-log elapsed time advances 00:29:32.455 -> 00:29:32.980 ...]
00:29:32.980 [2024-07-15 20:43:25.161413] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:32.980 [2024-07-15 20:43:25.161432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.980 [2024-07-15 20:43:25.161438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
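For context on the repeating triplet: when digests are negotiated on an NVMe/TCP queue pair, every C2HData PDU carries a 4-byte CRC32C data digest (DDGST). The *ERROR* line is nvme_tcp.c reporting that the CRC32C it computed over a received payload does not match the digest in the PDU, after which the affected READ is printed and completed with a transient transport status. A minimal sketch of that verification (not SPDK's code; the 512-byte payload and the single corrupted bit are assumed for illustration):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise CRC32C (Castagnoli, reflected polynomial 0x82F63B78), the checksum
 * NVMe/TCP specifies for its header (HDGST) and data (DDGST) digests. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    while (len--) {
        crc ^= *buf++;
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0x82F63B78u & (0u - (crc & 1u)));
    }
    return ~crc;
}

int main(void)
{
    /* Self-check against the published CRC32C test vector. */
    if (crc32c((const uint8_t *)"123456789", 9) != 0xE3069283u)
        return 1;

    uint8_t payload[512];                 /* stand-in for one C2HData payload */
    memset(payload, 0xA5, sizeof(payload));
    uint32_t ddgst = crc32c(payload, sizeof(payload));  /* digest as sent */

    payload[100] ^= 0x01;                 /* assume one bit flipped in flight */

    if (crc32c(payload, sizeof(payload)) != ddgst)      /* receiver's check */
        printf("data digest error: DDGST 0x%08x does not match payload\n",
               (unsigned)ddgst);
    return 0;
}

The self-test pins the routine to the published CRC32C check value, and the final comparison is the same decision the receiving side makes: recompute over what arrived, compare with what was sent, and fail the I/O on mismatch.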
00:29:32.980 [2024-07-15 20:43:25.172782] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:32.980 [2024-07-15 20:43:25.172803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.980 [2024-07-15 20:43:25.172809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same triplet repeats ~45 more times for qid:1 cid:14, lba and sqhd varying per I/O, 2024-07-15 20:43:25.183 through 20:43:25.674; build-log elapsed time advances 00:29:32.980 -> 00:29:33.503 ...]
00:29:33.503 [2024-07-15 20:43:25.686143] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:33.503 [2024-07-15 20:43:25.686161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.503 [2024-07-15 20:43:25.686167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
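The tail of every completion line decodes mechanically from the 16-bit status field of completion dword 3, laid out per the NVMe base specification: (00/22) is status code type 0x0 (generic command status) with status code 0x22 (Transient Transport Error), and p, m, dnr are the phase, more, and do-not-retry bits. A short sketch unpacking an assumed sample status word into the same rendering (not SPDK's printer):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Assumed sample: SCT 0x0 / SC 0x22 with p, m, dnr all clear, matching
     * the completions above. Field positions per the NVMe base spec. */
    uint16_t status = 0x22 << 1;

    unsigned p   =  status        & 0x1;  /* phase tag */
    unsigned sc  = (status >> 1)  & 0xff; /* status code: 0x22 = Transient Transport Error */
    unsigned sct = (status >> 9)  & 0x7;  /* status code type: 0x0 = generic command status */
    unsigned m   = (status >> 14) & 0x1;  /* more: extra info in the error log page */
    unsigned dnr = (status >> 15) & 0x1;  /* do not retry */

    printf("(%02x/%02x) p:%u m:%u dnr:%u\n", sct, sc, p, m, dnr);
    /* prints "(00/22) p:0 m:0 dnr:0", the tail of every completion above */
    return 0;
}

dnr:0 is the significant bit here: the failure is flagged as retryable, consistent with the TRANSIENT wording, so the initiator may reissue each failed READ.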
00:29:33.503 [2024-07-15 20:43:25.696538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:33.503 [2024-07-15 20:43:25.696556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.503 [2024-07-15 20:43:25.696562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... ~45 further triplets follow with cid now varying over 0-10 and 14 and lba/sqhd varying per I/O, 2024-07-15 20:43:25.706 through 20:43:26.212; build-log elapsed time advances 00:29:33.503 -> 00:29:34.024 ...]
00:29:34.024 [2024-07-15 20:43:26.225504] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:34.024 [2024-07-15 20:43:26.225526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.024 [2024-07-15 20:43:26.225532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.024 [2024-07-15 20:43:26.235810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:34.024 [2024-07-15 20:43:26.235829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.024
[2024-07-15 20:43:26.235835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.245987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.246006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.246012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.255881] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.255900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.255907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.266754] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.266774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.266780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.276372] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.276391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.276398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.286672] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.286691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.286697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.296209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.296236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.296243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.306731] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.306750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.306756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.316786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.316805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.316812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.329053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.329074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.329080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.339244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.339264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.339271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.350695] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.350714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.350721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.361103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.361123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.361130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.372417] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.372437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.372443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.383420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.383440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 
nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.383446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.024 [2024-07-15 20:43:26.395814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.024 [2024-07-15 20:43:26.395834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.024 [2024-07-15 20:43:26.395841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.406627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.406646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.285 [2024-07-15 20:43:26.406656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.417368] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.417388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.285 [2024-07-15 20:43:26.417394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.429958] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.429978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.285 [2024-07-15 20:43:26.429985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.441325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.441344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.285 [2024-07-15 20:43:26.441350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.452057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.452076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:34.285 [2024-07-15 20:43:26.452082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:34.285 [2024-07-15 20:43:26.463635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340) 00:29:34.285 [2024-07-15 20:43:26.463654] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.285 [2024-07-15 20:43:26.463660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:34.285 [2024-07-15 20:43:26.474377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:34.285 [2024-07-15 20:43:26.474397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.285 [2024-07-15 20:43:26.474403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:34.285 [2024-07-15 20:43:26.485739] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x862340)
00:29:34.285 [2024-07-15 20:43:26.485759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:34.285 [2024-07-15 20:43:26.485765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:29:34.285
00:29:34.285 Latency(us)
00:29:34.285 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.285 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:34.285 nvme0n1 : 2.00 2828.86 353.61 0.00 0.00 5653.00 1235.63 14854.83
00:29:34.285 ===================================================================================================================
00:29:34.285 Total : 2828.86 353.61 0.00 0.00 5653.00 1235.63 14854.83
00:29:34.285 0
00:29:34.285 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:34.285 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:34.285 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:34.285 | .driver_specific
00:29:34.285 | .nvme_error
00:29:34.285 | .status_code
00:29:34.285 | .command_transient_transport_error'
00:29:34.285 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 182 > 0 ))
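The check traced above is the heart of the read-phase assertion: get_transient_errcount pulls the per-bdev NVMe error counters that bdev_nvme_set_options --nvme-error-stat exposes, and the test passes only when the transient transport error count (182 in this run) is positive. A minimal standalone sketch of what that helper amounts to, assuming the same rpc.py path and bperf socket shown in the trace; the function wrapper here is illustrative, not a copy of digest.sh:

    # Hypothetical standalone version of digest.sh's get_transient_errcount:
    # ask bdevperf for one bdev's iostat and extract the counter that each
    # data digest failure above increments.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    get_transient_errcount() {
        "$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b "$1" \
            | jq -r '.bdevs[0]
                     | .driver_specific
                     | .nvme_error
                     | .status_code
                     | .command_transient_transport_error'
    }
    (($(get_transient_errcount nvme0n1) > 0))  # the digest.sh@71 assertion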
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1523979
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1523979 ']'
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1523979
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523979
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523979'
killing process with pid 1523979
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1523979
Received shutdown signal, test time was about 2.000000 seconds
00:29:34.545
00:29:34.545 Latency(us)
00:29:34.545 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.545 ===================================================================================================================
00:29:34.545 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1523979
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1524739
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1524739 /var/tmp/bperf.sock
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1524739 ']'
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:29:34.545 20:43:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
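run_bperf_err restarts bdevperf for the write-phase run before driving it over RPC: the -z flag above starts the app with no preconfigured jobs, and waitforlisten blocks until the RPC socket answers. A rough equivalent of those two traced steps; the polling loop is an illustrative stand-in for autotest_common.sh's waitforlisten, not a copy of it, and $rootdir is assumed to point at the SPDK checkout:

    # Launch bdevperf idle (-z) with the run_bperf_err arguments:
    # randwrite, 4096-byte IOs, queue depth 128, 2-second run, core mask 0x2.
    "$rootdir/build/examples/bdevperf" -m 2 -r /var/tmp/bperf.sock \
        -w randwrite -o 4096 -t 2 -q 128 -z &
    bperfpid=$!
    # Poll until the UNIX domain socket accepts RPCs (up to max_retries=100).
    for ((i = 0; i < 100; i++)); do
        "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock rpc_get_methods &> /dev/null && break
        sleep 0.1
    done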
00:29:34.545 [2024-07-15 20:43:26.900974] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:29:34.545 [2024-07-15 20:43:26.901050] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1524739 ]
00:29:34.805 EAL: No free 2048 kB hugepages reported on node 1
00:29:34.805 [2024-07-15 20:43:26.981417] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:34.805 [2024-07-15 20:43:27.034655] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:35.373 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:35.373 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:35.373 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.373 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:35.663 20:43:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:35.953 nvme0n1
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:35.954 20:43:28 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:35.954 Running I/O for 2 seconds...
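Condensed, the write-phase setup traced above is four RPCs and a trigger. In this hedged sketch $rpc stands for the full scripts/rpc.py -s /var/tmp/bperf.sock invocation from the log:

    # 1. Count NVMe errors per status code and never retry failed commands,
    #    so every digest failure surfaces in the iostat counters.
    $rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # 2. Keep crc32c error injection off while the controller attaches.
    $rpc accel_error_inject_error -o crc32c -t disable
    # 3. Attach over TCP with data digest (--ddgst) enabled, creating nvme0n1.
    $rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # 4. Corrupt the crc32c computation once every 256 operations (-i), so a
    #    predictable slice of the writes fails digest validation and completes
    #    as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the records below.
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 256
    # 5. Start the timed 2-second randwrite run via bdevperf's helper script.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests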
00:29:35.954 [2024-07-15 20:43:28.257111] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e95a0 00:29:35.954 [2024-07-15 20:43:28.258755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:10594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.258783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.954 [2024-07-15 20:43:28.266931] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f2510 00:29:35.954 [2024-07-15 20:43:28.268011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:15407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.268028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:35.954 [2024-07-15 20:43:28.279568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f1430 00:29:35.954 [2024-07-15 20:43:28.280612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.280630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.954 [2024-07-15 20:43:28.291399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f0350 00:29:35.954 [2024-07-15 20:43:28.292431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:13681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.292449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.954 [2024-07-15 20:43:28.303252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ef270 00:29:35.954 [2024-07-15 20:43:28.304318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:15051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.304335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:35.954 [2024-07-15 20:43:28.315086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ee190 00:29:35.954 [2024-07-15 20:43:28.316021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:35.954 [2024-07-15 20:43:28.316037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.326916] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6cc8 00:29:36.229 [2024-07-15 20:43:28.327966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.327982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 
sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.338743] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7da8 00:29:36.229 [2024-07-15 20:43:28.339794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.339811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.350569] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8e88 00:29:36.229 [2024-07-15 20:43:28.351656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:12883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.351672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.362397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9f68 00:29:36.229 [2024-07-15 20:43:28.363480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:15151 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.363496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.374183] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fb048 00:29:36.229 [2024-07-15 20:43:28.375275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:23077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.375292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.385984] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f2d80 00:29:36.229 [2024-07-15 20:43:28.387069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.387086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.397799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e95a0 00:29:36.229 [2024-07-15 20:43:28.398884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:14828 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.398901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.409610] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e84c0 00:29:36.229 [2024-07-15 20:43:28.410691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.410709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.421402] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e73e0 00:29:36.229 [2024-07-15 20:43:28.422481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20928 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.229 [2024-07-15 20:43:28.422498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.229 [2024-07-15 20:43:28.433152] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6300 00:29:36.230 [2024-07-15 20:43:28.434237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.434254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.444942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5220 00:29:36.230 [2024-07-15 20:43:28.446024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:17784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.446041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.456755] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:36.230 [2024-07-15 20:43:28.457845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22714 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.457861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.468568] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3060 00:29:36.230 [2024-07-15 20:43:28.469653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.469670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.480375] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1f80 00:29:36.230 [2024-07-15 20:43:28.481429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:11376 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.481449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.492121] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0ea0 00:29:36.230 [2024-07-15 20:43:28.493219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:20679 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.493238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.503928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f57b0 00:29:36.230 [2024-07-15 20:43:28.505002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:8091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.505019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.515713] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6890 00:29:36.230 [2024-07-15 20:43:28.516812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:1655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.516829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.527535] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7970 00:29:36.230 [2024-07-15 20:43:28.528630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.528647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.539431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8a50 00:29:36.230 [2024-07-15 20:43:28.540524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:17819 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.540541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.551245] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9b30 00:29:36.230 [2024-07-15 20:43:28.552327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:20731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.552344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.563047] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fac10 00:29:36.230 [2024-07-15 20:43:28.564133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:23702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.564150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.574875] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f31b8 00:29:36.230 [2024-07-15 20:43:28.575965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:3776 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.575981] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.586665] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ea248 00:29:36.230 [2024-07-15 20:43:28.587746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:16807 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.587766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.230 [2024-07-15 20:43:28.598469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e9168 00:29:36.230 [2024-07-15 20:43:28.599559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.230 [2024-07-15 20:43:28.599576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.610254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e8088 00:29:36.490 [2024-07-15 20:43:28.611326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24606 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.611343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.622042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6fa8 00:29:36.490 [2024-07-15 20:43:28.623113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.623129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.633843] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5ec8 00:29:36.490 [2024-07-15 20:43:28.634927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:15498 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.634944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.645648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4de8 00:29:36.490 [2024-07-15 20:43:28.646733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:21280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.646750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.657483] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3d08 00:29:36.490 [2024-07-15 20:43:28.658579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:16361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.658595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.669289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e2c28 00:29:36.490 [2024-07-15 20:43:28.670371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.670387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.681097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1b48 00:29:36.490 [2024-07-15 20:43:28.682183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:3544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.490 [2024-07-15 20:43:28.682200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.490 [2024-07-15 20:43:28.692909] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0a68 00:29:36.490 [2024-07-15 20:43:28.693971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.693988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.704711] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f5be8 00:29:36.491 [2024-07-15 20:43:28.705797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.705813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.716525] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6cc8 00:29:36.491 [2024-07-15 20:43:28.717623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.717640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.728396] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7da8 00:29:36.491 [2024-07-15 20:43:28.729484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:2833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.729501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.740170] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8e88 00:29:36.491 [2024-07-15 20:43:28.741260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:11604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 
20:43:28.741277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.751960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9f68 00:29:36.491 [2024-07-15 20:43:28.753040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.753057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.763781] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fb048 00:29:36.491 [2024-07-15 20:43:28.764882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:21407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.764899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.775582] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f2d80 00:29:36.491 [2024-07-15 20:43:28.776652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:4519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.776668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.787362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e95a0 00:29:36.491 [2024-07-15 20:43:28.788454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:4395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.788470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.799156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e84c0 00:29:36.491 [2024-07-15 20:43:28.800241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.800257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.810961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e73e0 00:29:36.491 [2024-07-15 20:43:28.812049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13578 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.812065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.822748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6300 00:29:36.491 [2024-07-15 20:43:28.823819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:4347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 
[2024-07-15 20:43:28.823835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.834538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5220 00:29:36.491 [2024-07-15 20:43:28.835625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:24771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.835642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.846325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:36.491 [2024-07-15 20:43:28.847415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:16261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.847432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.491 [2024-07-15 20:43:28.858149] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3060 00:29:36.491 [2024-07-15 20:43:28.859237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:7430 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.491 [2024-07-15 20:43:28.859254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.869949] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1f80 00:29:36.750 [2024-07-15 20:43:28.871038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6239 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.871054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.881761] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0ea0 00:29:36.750 [2024-07-15 20:43:28.882842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:14680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.882858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.893564] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f57b0 00:29:36.750 [2024-07-15 20:43:28.894650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:23599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.894669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.905357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6890 00:29:36.750 [2024-07-15 20:43:28.906455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:19143 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:36.750 [2024-07-15 20:43:28.906472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.917134] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7970 00:29:36.750 [2024-07-15 20:43:28.918221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.918240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.928930] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8a50 00:29:36.750 [2024-07-15 20:43:28.930019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:1432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.930036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.940733] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9b30 00:29:36.750 [2024-07-15 20:43:28.941831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:3556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.750 [2024-07-15 20:43:28.941848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.750 [2024-07-15 20:43:28.952557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fac10 00:29:36.750 [2024-07-15 20:43:28.953650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10741 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:28.953667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:28.964371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f31b8 00:29:36.751 [2024-07-15 20:43:28.965468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:76 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:28.965484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:28.976136] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ea248 00:29:36.751 [2024-07-15 20:43:28.977239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:28.977256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:28.987937] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e9168 00:29:36.751 [2024-07-15 20:43:28.989025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:1552 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:28.989041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:28.999735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e8088 00:29:36.751 [2024-07-15 20:43:29.000825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12452 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.000842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.011543] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6fa8 00:29:36.751 [2024-07-15 20:43:29.012627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.012644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.023317] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5ec8 00:29:36.751 [2024-07-15 20:43:29.024402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:15105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.024420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.035103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4de8 00:29:36.751 [2024-07-15 20:43:29.036186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:15113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.036202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.046902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3d08 00:29:36.751 [2024-07-15 20:43:29.047988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:2395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.048005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.058705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e2c28 00:29:36.751 [2024-07-15 20:43:29.059789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:17083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.059805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.070494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1b48 00:29:36.751 [2024-07-15 20:43:29.071565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 
nsid:1 lba:10932 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.071581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.082254] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0a68 00:29:36.751 [2024-07-15 20:43:29.083340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22045 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.083356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.094038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f5be8 00:29:36.751 [2024-07-15 20:43:29.095125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:18702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.095141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.105961] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6cc8 00:29:36.751 [2024-07-15 20:43:29.107054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.107071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.117736] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7da8 00:29:36.751 [2024-07-15 20:43:29.118832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:36.751 [2024-07-15 20:43:29.118849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:36.751 [2024-07-15 20:43:29.129514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8e88 00:29:37.012 [2024-07-15 20:43:29.130596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:6530 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.130614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.141314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9f68 00:29:37.012 [2024-07-15 20:43:29.142409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:2339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.142425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.153305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fb048 00:29:37.012 [2024-07-15 20:43:29.154376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:8 nsid:1 lba:12280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.154392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.165075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f2d80 00:29:37.012 [2024-07-15 20:43:29.166168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.166184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.176884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e95a0 00:29:37.012 [2024-07-15 20:43:29.177971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.177988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.188671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e84c0 00:29:37.012 [2024-07-15 20:43:29.189771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:23920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.189787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.200457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e73e0 00:29:37.012 [2024-07-15 20:43:29.201527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.201549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.212255] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6300 00:29:37.012 [2024-07-15 20:43:29.213305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:24378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.213320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.224053] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5220 00:29:37.012 [2024-07-15 20:43:29.225143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:14040 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.225160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.235857] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.012 [2024-07-15 20:43:29.236909] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:12858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.236925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.247643] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3060 00:29:37.012 [2024-07-15 20:43:29.248727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.248744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.259442] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1f80 00:29:37.012 [2024-07-15 20:43:29.260503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.260519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.271209] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0ea0 00:29:37.012 [2024-07-15 20:43:29.272290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.272308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.282998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f57b0 00:29:37.012 [2024-07-15 20:43:29.284078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.284095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.294783] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6890 00:29:37.012 [2024-07-15 20:43:29.295910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.295926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.306640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7970 00:29:37.012 [2024-07-15 20:43:29.307742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.307758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.318438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8a50 00:29:37.012 [2024-07-15 20:43:29.319506] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.319522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.330193] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9b30 00:29:37.012 [2024-07-15 20:43:29.331282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:10377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.331298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.341983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fac10 00:29:37.012 [2024-07-15 20:43:29.343063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.343080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.353788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f31b8 00:29:37.012 [2024-07-15 20:43:29.354892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:16472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.354908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.365648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ea248 00:29:37.012 [2024-07-15 20:43:29.366730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.366746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.377434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e9168 00:29:37.012 [2024-07-15 20:43:29.378518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:22218 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.378535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.012 [2024-07-15 20:43:29.389207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e8088 00:29:37.012 [2024-07-15 20:43:29.390301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.012 [2024-07-15 20:43:29.390317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.400998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6fa8 00:29:37.273 [2024-07-15 20:43:29.402062] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:18127 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.402078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.412788] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5ec8 00:29:37.273 [2024-07-15 20:43:29.413872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.413890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.424690] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4de8 00:29:37.273 [2024-07-15 20:43:29.425781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6911 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.425798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.436503] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3d08 00:29:37.273 [2024-07-15 20:43:29.437582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:11383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.437599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.448288] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e2c28 00:29:37.273 [2024-07-15 20:43:29.449343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:19782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.449359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.460074] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1b48 00:29:37.273 [2024-07-15 20:43:29.461167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.273 [2024-07-15 20:43:29.461183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.273 [2024-07-15 20:43:29.471871] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0a68 00:29:37.274 [2024-07-15 20:43:29.472958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.472974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.483679] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f5be8 00:29:37.274 
[2024-07-15 20:43:29.484762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:2609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.484777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.495494] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6cc8 00:29:37.274 [2024-07-15 20:43:29.496580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.496597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.507289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7da8 00:29:37.274 [2024-07-15 20:43:29.508327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:19594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.508347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.519076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8e88 00:29:37.274 [2024-07-15 20:43:29.520161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8070 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.520177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.530878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9f68 00:29:37.274 [2024-07-15 20:43:29.531918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.531935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.542773] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fb048 00:29:37.274 [2024-07-15 20:43:29.543877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:7513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.543893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.554591] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f2d80 00:29:37.274 [2024-07-15 20:43:29.555686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:3998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.555702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.566387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with 
pdu=0x2000190e95a0 00:29:37.274 [2024-07-15 20:43:29.567487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.567503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.578156] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e84c0 00:29:37.274 [2024-07-15 20:43:29.579237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:12263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.579253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.589956] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e73e0 00:29:37.274 [2024-07-15 20:43:29.591045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:3873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.591062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.601740] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6300 00:29:37.274 [2024-07-15 20:43:29.602826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:6355 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.602843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.613553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5220 00:29:37.274 [2024-07-15 20:43:29.614655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:19213 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.614672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.625349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.274 [2024-07-15 20:43:29.626429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.626445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.637138] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3060 00:29:37.274 [2024-07-15 20:43:29.638228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:4413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.638247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.274 [2024-07-15 20:43:29.648921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x742ac0) with pdu=0x2000190e1f80 00:29:37.274 [2024-07-15 20:43:29.650009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:8369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.274 [2024-07-15 20:43:29.650025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.660719] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0ea0 00:29:37.535 [2024-07-15 20:43:29.661816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19106 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.661833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.672509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f57b0 00:29:37.535 [2024-07-15 20:43:29.673587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.673603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.684306] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f6890 00:29:37.535 [2024-07-15 20:43:29.685381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:22878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.685397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.696080] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f7970 00:29:37.535 [2024-07-15 20:43:29.697166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:6325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.697182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.707852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f8a50 00:29:37.535 [2024-07-15 20:43:29.708934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:12668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.708950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.719657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f9b30 00:29:37.535 [2024-07-15 20:43:29.720741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.535 [2024-07-15 20:43:29.720758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.535 [2024-07-15 20:43:29.731450] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x742ac0) with pdu=0x2000190fac10 00:29:37.536 [2024-07-15 20:43:29.732515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.732531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.743246] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f31b8 00:29:37.536 [2024-07-15 20:43:29.744289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:5446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.744305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.755057] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190ea248 00:29:37.536 [2024-07-15 20:43:29.756142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.756159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.766853] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e9168 00:29:37.536 [2024-07-15 20:43:29.767931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:8908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.767947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.778654] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e8088 00:29:37.536 [2024-07-15 20:43:29.779742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11662 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.779758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.790428] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6fa8 00:29:37.536 [2024-07-15 20:43:29.791509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:14270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.791526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.802251] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e5ec8 00:29:37.536 [2024-07-15 20:43:29.803302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:21954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.803318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.814032] tcp.c:2067:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4de8 00:29:37.536 [2024-07-15 20:43:29.815120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:16905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.815136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.825861] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e3d08 00:29:37.536 [2024-07-15 20:43:29.826949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.826965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.837678] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e2c28 00:29:37.536 [2024-07-15 20:43:29.838756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4199 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.838773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.849469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1b48 00:29:37.536 [2024-07-15 20:43:29.850555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:24744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.850572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.862899] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e0a68 00:29:37.536 [2024-07-15 20:43:29.864624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.864641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.873556] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190f92c0 00:29:37.536 [2024-07-15 20:43:29.874790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:24029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.874807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.887030] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e1f80 00:29:37.536 [2024-07-15 20:43:29.888879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:12410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.888895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.897681] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e6738 00:29:37.536 [2024-07-15 20:43:29.899088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:4553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.899104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:37.536 [2024-07-15 20:43:29.911132] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190fd208 00:29:37.536 [2024-07-15 20:43:29.913172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.536 [2024-07-15 20:43:29.913189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.921775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e49b0 00:29:37.798 [2024-07-15 20:43:29.923292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:20360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.923311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.931413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.932313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.932330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.943187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.944090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:1017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.944106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.954941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.955791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:22864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.955807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.966700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.967601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:22390 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.967618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 
20:43:29.978462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.979340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17524 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.979357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:29.990225] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:29.991148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:29.991164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.002495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.003366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25395 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.003384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.014297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.015154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:3431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.015171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.026059] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.026925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:18814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.026941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.037914] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.038826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.038842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.049699] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.050601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:17867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.050617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 
[2024-07-15 20:43:30.061500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.062403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:13959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.062420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.073325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.074239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:19512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.074255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.085088] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.085985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.086002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.096858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.097705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:4298 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.097722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.108648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.109565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:1908 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.109581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.120437] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.121302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:4236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.121318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.132218] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.133122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:10579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.133138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
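Every injected failure in the stream above leaves the same three-record signature: tcp.c:data_crc32_calc_done flags the CRC-32C data digest mismatch on the qpair, nvme_qpair.c prints the WRITE whose payload digest was corrupted, and the command completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22). The harness arms the bperf initiator with --nvme-error-stat and --bdev-retry-count -1 (see the bdev_nvme_set_options trace further down), so each such completion is counted in the bdev's NVMe error statistics and retried instead of failing the job. A minimal sketch of reading that counter back, using the same rpc.py/jq pipeline the get_transient_errcount trace below shows (socket and workspace paths as used throughout this run):

    #!/usr/bin/env bash
    # Pull the per-status-code NVMe error counters kept by the bperf app and
    # extract the "command transient transport error" tally for nvme0n1.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    SOCK=/var/tmp/bperf.sock

    errcount=$("$RPC" -s "$SOCK" bdev_get_iostat -b nvme0n1 |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error')

    # digest.sh@71 passes only when at least one injected error was observed;
    # this run counted 170 of them.
    (( errcount > 0 )) && echo "observed ${errcount} transient transport errors"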
00:29:37.798 [2024-07-15 20:43:30.143972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.144882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.144898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.155942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.156843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22075 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.156859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:37.798 [2024-07-15 20:43:30.167697] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:37.798 [2024-07-15 20:43:30.168598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:17235 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:37.798 [2024-07-15 20:43:30.168614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:38.060 [2024-07-15 20:43:30.179469] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:38.060 [2024-07-15 20:43:30.180345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:17571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.060 [2024-07-15 20:43:30.180361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:38.060 [2024-07-15 20:43:30.191226] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:38.060 [2024-07-15 20:43:30.192130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:25202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.060 [2024-07-15 20:43:30.192145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:38.060 [2024-07-15 20:43:30.202997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:38.060 [2024-07-15 20:43:30.203898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.060 [2024-07-15 20:43:30.203914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:38.060 [2024-07-15 20:43:30.214747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140 00:29:38.060 [2024-07-15 20:43:30.215656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:38.060 [2024-07-15 20:43:30.215672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0044 
p:0 m:0 dnr:0
00:29:38.060 [2024-07-15 20:43:30.226526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140
00:29:38.060 [2024-07-15 20:43:30.227424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12596 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:38.060 [2024-07-15 20:43:30.227443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:38.060 [2024-07-15 20:43:30.238279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140
00:29:38.060 [2024-07-15 20:43:30.239172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:17966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:38.060 [2024-07-15 20:43:30.239188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:38.060 [2024-07-15 20:43:30.250045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742ac0) with pdu=0x2000190e4140
00:29:38.060 [2024-07-15 20:43:30.250942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:38.060 [2024-07-15 20:43:30.250958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:29:38.060
00:29:38.060 Latency(us)
00:29:38.060 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:38.060 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:38.060 nvme0n1                     :       2.01   21614.99      84.43       0.00     0.00    5912.34    2239.15   14199.47
00:29:38.060 ===================================================================================================================
00:29:38.060 Total                       :            21614.99      84.43       0.00     0.00    5912.34    2239.15   14199.47
00:29:38.060 0
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:38.060 | .driver_specific
00:29:38.060 | .nvme_error
00:29:38.060 | .status_code
00:29:38.060 | .command_transient_transport_error'
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 170 > 0 ))
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1524739
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1524739 ']'
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1524739
00:29:38.060 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:38.321 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:38.321 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1524739
00:29:38.321 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1524739'
killing process with pid 1524739
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1524739
Received shutdown signal, test time was about 2.000000 seconds
00:29:38.321
00:29:38.321 Latency(us)
00:29:38.321 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:29:38.321 ===================================================================================================================
00:29:38.321 Total                       :                  0.00       0.00       0.00     0.00       0.00       0.00       0.00
00:29:38.321 20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1524739
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1525505
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1525505 /var/tmp/bperf.sock
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1525505 ']'
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
20:43:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:38.322 [2024-07-15 20:43:30.664443] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:29:38.322 [2024-07-15 20:43:30.664528] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1525505 ]
00:29:38.322 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:38.322 Zero copy mechanism will not be used.
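Before the EAL messages that follow, the fixture for this 131072-byte run is the bdevperf launch traced at host/digest.sh@57 above. As a sketch, the same invocation with each flag annotated (meanings as I read SPDK's bdevperf usage text, not stated by the log itself):

    # Launch bdevperf as the bperf fixture, held idle until perform_tests:
    #   -m 2          core mask 0x2, i.e. run the reactor on core 1
    #   -r <socket>   UNIX-domain RPC socket that bperf_rpc/bperf_py target
    #   -w randwrite  random-write workload
    #   -o 131072     I/O size in bytes (128 KiB, versus 4096 in the run above)
    #   -t 2          run time in seconds
    #   -q 16         queue depth
    #   -z            start idle and wait for the perform_tests RPC
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z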
00:29:38.322 EAL: No free 2048 kB hugepages reported on node 1
00:29:38.581 [2024-07-15 20:43:30.743713] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:38.581 [2024-07-15 20:43:30.797300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:29:39.152 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:29:39.152 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:29:39.152 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.152 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:39.412 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:39.412 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:39.412 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.412 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:39.412 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.413 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:39.673 nvme0n1
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:39.673 20:43:31 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:39.673 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:39.673 Zero copy mechanism will not be used.
00:29:39.673 Running I/O for 2 seconds...
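
Collapsed from the xtrace above, the setup for this error pass is roughly the following sketch. $rpc stands in for the full scripts/rpc.py path; that accel_error_inject_error goes to the target application's default RPC socket rather than the bperf socket is an inference from rpc_cmd, not something this log states:

    rpc=scripts/rpc.py

    # bperf side: keep per-bdev NVMe error statistics and retry failed I/O indefinitely
    $rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # target side: corrupt the next 32 crc32c operations in the accel layer,
    # so the TCP data digests computed for those PDUs come out wrong
    $rpc accel_error_inject_error -o crc32c -t corrupt -i 32

    # attach with data digest enabled (--ddgst) so the corruption is caught on the wire
    $rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # drive the 2-second randwrite workload started above
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

In the output that follows, each "Data digest error" line is paired with a COMMAND TRANSIENT TRANSPORT ERROR completion, and those completions are what the transient-error counter checked at the end of the pass adds up.
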
00:29:39.934 [2024-07-15 20:43:32.059669] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.934 [2024-07-15 20:43:32.060068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-07-15 20:43:32.060094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.934 [2024-07-15 20:43:32.071253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.934 [2024-07-15 20:43:32.071594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-07-15 20:43:32.071615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.934 [2024-07-15 20:43:32.081842] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.934 [2024-07-15 20:43:32.082211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-07-15 20:43:32.082234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.934 [2024-07-15 20:43:32.092009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.934 [2024-07-15 20:43:32.092365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.934 [2024-07-15 20:43:32.092383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.934 [2024-07-15 20:43:32.099545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.934 [2024-07-15 20:43:32.099773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.099790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.105759] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.106097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.106115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.112486] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.112599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.112616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.121545] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.121758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.121775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.127379] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.127590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.127607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.132144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.132360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.132377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.137742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.138063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.138080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.143760] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.143969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.143985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.149747] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.149956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.149972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.154284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.154599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.154617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.160676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.160888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.160904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.167014] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.167223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.167247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.174972] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.175184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.175201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.183536] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.183869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.183886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.190406] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.190731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.190748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.199417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.199751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.199768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.208310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.208667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.208684] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.216350] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.216658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.216675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.224337] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.224563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.224579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.232811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.233120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.233137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.238840] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.239177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.239194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.245075] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.245293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.245309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.250518] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.250740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.250757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.256296] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.256515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 
[2024-07-15 20:43:32.256531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.263049] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.263389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.263405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.269934] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.270146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.270161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.277003] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.277323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.277340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.287430] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.287735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.287752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.297207] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.297286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.297302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.935 [2024-07-15 20:43:32.307537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:39.935 [2024-07-15 20:43:32.307867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:39.935 [2024-07-15 20:43:32.307884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.318242] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.318553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.318571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.328567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.328913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.328929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.339144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.339471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.339489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.350076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.350412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.350429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.359560] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.359676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.359691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.370180] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.370518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.370535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.380275] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.380510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.380527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.389824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.390200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.390220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.399625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.399992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.400009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.408828] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.409199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.409216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.418611] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.418847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.418864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.427795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.428003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.428019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.435687] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.435924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.435942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.445018] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.445372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.445389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.454998] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.455259] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.455277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.464848] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.465180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.465197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.474101] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.474332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.474350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.482381] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.482587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.482604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.489864] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.490216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.197 [2024-07-15 20:43:32.490239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.197 [2024-07-15 20:43:32.496524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.197 [2024-07-15 20:43:32.496728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.496744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.502309] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.502510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.502527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.509024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.509324] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.509341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.518252] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.518627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.518645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.526673] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.527067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.527085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.531487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.531802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.531819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.539706] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.540043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.540061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.545600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.545889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.545906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.552524] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.552724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.552741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.557244] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 
[2024-07-15 20:43:32.557441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.557457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.561371] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.561569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.561585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.565819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.566014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.566030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.570506] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.570700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.570717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.198 [2024-07-15 20:43:32.574357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.198 [2024-07-15 20:43:32.574539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.198 [2024-07-15 20:43:32.574555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.578765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.578949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.578968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.583017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.583201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.583217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.586737] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) 
with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.586919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.586936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.593125] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.593388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.593406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.600103] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.600291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.600307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.608517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.608802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.608819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.619692] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.620034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.620050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.630817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.631182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.631199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.641878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.642163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.642180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.651327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.651754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.651771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.663130] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.663516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.663533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.674448] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.674799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.674816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.685186] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.685646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.685664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.696547] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.696981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.696998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.707228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.707763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.707780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.718596] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.718914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.718931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.729301] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.729713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.729730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.740439] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.740697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.740715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.751372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.751482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.751498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.762372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.762717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.762734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.774389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.774748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.774765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.785858] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.786100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.786117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.796028] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.796239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.796255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 
00:29:40.460 [2024-07-15 20:43:32.805644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.805941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.805958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.814147] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.814519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.814536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.824907] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.825220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.825241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:40.460 [2024-07-15 20:43:32.834137] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.460 [2024-07-15 20:43:32.834431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.460 [2024-07-15 20:43:32.834452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.722 [2024-07-15 20:43:32.842573] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.722 [2024-07-15 20:43:32.842779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.722 [2024-07-15 20:43:32.842795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:40.722 [2024-07-15 20:43:32.851224] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.722 [2024-07-15 20:43:32.851457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.722 [2024-07-15 20:43:32.851472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:40.722 [2024-07-15 20:43:32.858847] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90 00:29:40.722 [2024-07-15 20:43:32.859251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:40.722 [2024-07-15 20:43:32.859268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:40.722 [2024-07-15 20:43:32.866212] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90
00:29:40.722 [2024-07-15 20:43:32.866527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:40.722 [2024-07-15 20:43:32.866545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line digest-error pattern (tcp.c data_crc32_calc_done *ERROR*, nvme_qpair.c WRITE command print, TRANSIENT TRANSPORT ERROR completion) repeats here for every injected error from 20:43:32.872 through 20:43:34.042, differing only in timestamp, lba, and sqhd; the run is tallied as 221 transient transport errors below ...]
00:29:41.772 [2024-07-15 20:43:34.053268] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x742bf0) with pdu=0x2000190fef90
00:29:41.772 [2024-07-15 20:43:34.053502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:41.772 [2024-07-15 20:43:34.053518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
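For anyone triaging a block like the one above offline, the repeated pattern is easy to tally from a saved copy of the console output. A minimal sketch, assuming the output has been captured to a hypothetical build.log (the grep pattern is taken verbatim from the error lines; the path is a placeholder, not a file produced by this job):

  # Count the induced NVMe/TCP data digest errors in a captured console log.
  grep -c 'data_crc32_calc_done: \*ERROR\*: Data digest error' build.log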
00:29:41.772
00:29:41.772 Latency(us)
00:29:41.772 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:41.772 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:41.772 nvme0n1 : 2.00 3420.48 427.56 0.00 0.00 4669.38 1802.24 12288.00
00:29:41.772 ===================================================================================================================
00:29:41.772 Total : 3420.48 427.56 0.00 0.00 4669.38 1802.24 12288.00
00:29:41.772 0
00:29:41.772 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:41.772 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:41.772 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:41.772 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:41.772 | .driver_specific
00:29:41.772 | .nvme_error
00:29:41.772 | .status_code
00:29:41.772 | .command_transient_transport_error'
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 221 > 0 ))
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1525505
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1525505 ']'
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1525505
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1525505
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1525505'
00:29:42.031 killing process with pid 1525505
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1525505
00:29:42.031 Received shutdown signal, test time was about 2.000000 seconds
00:29:42.031
00:29:42.031 Latency(us)
00:29:42.031 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:42.031 ===================================================================================================================
00:29:42.031 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1525505
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1523102
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1523102 ']'
00:29:42.031 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1523102
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1523102
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1523102'
00:29:42.291 killing process with pid 1523102
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1523102
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1523102
00:29:42.291
00:29:42.291 real 0m16.123s
00:29:42.291 user 0m31.605s
00:29:42.291 sys 0m3.329s
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:42.291 ************************************
00:29:42.291 END TEST nvmf_digest_error
00:29:42.291 ************************************
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:29:42.291 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:29:42.291 rmmod nvme_tcp
00:29:42.551 rmmod nvme_fabrics
00:29:42.551 rmmod nvme_keyring
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1523102 ']'
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1523102
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1523102 ']'
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1523102
00:29:42.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1523102) - No such process
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1523102 is not found'
00:29:42.551 Process with pid 1523102 is not found
00:29:42.551 20:43:34 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.464 20:43:36 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:44.464 00:29:44.464 real 0m43.201s 00:29:44.464 user 1m6.000s 00:29:44.464 sys 0m12.870s 00:29:44.464 20:43:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:44.464 20:43:36 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:44.464 ************************************ 00:29:44.464 END TEST nvmf_digest 00:29:44.464 ************************************ 00:29:44.464 20:43:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:29:44.464 20:43:36 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]] 00:29:44.464 20:43:36 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]] 00:29:44.464 20:43:36 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]] 00:29:44.464 20:43:36 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:44.464 20:43:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:29:44.464 20:43:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:44.464 20:43:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:44.724 ************************************ 00:29:44.724 START TEST nvmf_bdevperf 00:29:44.724 ************************************ 00:29:44.724 20:43:36 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:44.724 * Looking for test storage... 00:29:44.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:44.725 20:43:36 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:44.725 20:43:36 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:44.725 20:43:37 
nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:29:44.725 20:43:37 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:29:52.865 Found 0000:31:00.0 (0x8086 - 0x159b) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:29:52.865 Found 0000:31:00.1 (0x8086 - 0x159b) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:29:52.865 Found net devices under 0000:31:00.0: cvl_0_0 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:52.865 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:29:52.866 Found net devices under 0000:31:00.1: cvl_0_1 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:52.866 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:52.866 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.739 ms 00:29:52.866 00:29:52.866 --- 10.0.0.2 ping statistics --- 00:29:52.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.866 rtt min/avg/max/mdev = 0.739/0.739/0.739/0.000 ms 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:52.866 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:52.866 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms 00:29:52.866 00:29:52.866 --- 10.0.0.1 ping statistics --- 00:29:52.866 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:52.866 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1530759 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1530759 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1530759 ']' 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:52.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:52.866 20:43:44 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:52.866 [2024-07-15 20:43:44.701197] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:29:52.866 [2024-07-15 20:43:44.701272] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.866 EAL: No free 2048 kB hugepages reported on node 1 00:29:52.866 [2024-07-15 20:43:44.801220] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:52.866 [2024-07-15 20:43:44.896294] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:52.866 [2024-07-15 20:43:44.896352] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.866 [2024-07-15 20:43:44.896361] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.866 [2024-07-15 20:43:44.896368] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.866 [2024-07-15 20:43:44.896374] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.866 [2024-07-15 20:43:44.896509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:52.866 [2024-07-15 20:43:44.896672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.866 [2024-07-15 20:43:44.896673] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:53.126 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:53.126 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:29:53.126 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:53.126 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:53.126 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 [2024-07-15 20:43:45.529828] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 Malloc0 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:53.386 [2024-07-15 20:43:45.598763] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:53.386 { 00:29:53.386 "params": { 00:29:53.386 "name": "Nvme$subsystem", 00:29:53.386 "trtype": "$TEST_TRANSPORT", 00:29:53.386 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:53.386 "adrfam": "ipv4", 00:29:53.386 "trsvcid": "$NVMF_PORT", 00:29:53.386 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:53.386 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:53.386 "hdgst": ${hdgst:-false}, 00:29:53.386 "ddgst": ${ddgst:-false} 00:29:53.386 }, 00:29:53.386 "method": "bdev_nvme_attach_controller" 00:29:53.386 } 00:29:53.386 EOF 00:29:53.386 )") 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:53.386 20:43:45 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:53.386 "params": { 00:29:53.386 "name": "Nvme1", 00:29:53.386 "trtype": "tcp", 00:29:53.386 "traddr": "10.0.0.2", 00:29:53.386 "adrfam": "ipv4", 00:29:53.386 "trsvcid": "4420", 00:29:53.386 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:53.386 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:53.386 "hdgst": false, 00:29:53.386 "ddgst": false 00:29:53.386 }, 00:29:53.386 "method": "bdev_nvme_attach_controller" 00:29:53.386 }' 00:29:53.386 [2024-07-15 20:43:45.661252] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:29:53.386 [2024-07-15 20:43:45.661305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1530903 ] 00:29:53.386 EAL: No free 2048 kB hugepages reported on node 1 00:29:53.386 [2024-07-15 20:43:45.725871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:53.646 [2024-07-15 20:43:45.790690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:53.646 Running I/O for 1 seconds... 00:29:54.585 00:29:54.585 Latency(us) 00:29:54.585 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:54.585 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:54.585 Verification LBA range: start 0x0 length 0x4000 00:29:54.585 Nvme1n1 : 1.01 9188.95 35.89 0.00 0.00 13867.78 2607.79 14090.24 00:29:54.585 =================================================================================================================== 00:29:54.585 Total : 9188.95 35.89 0.00 0.00 13867.78 2607.79 14090.24 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1531240 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:54.846 { 00:29:54.846 "params": { 00:29:54.846 "name": "Nvme$subsystem", 00:29:54.846 "trtype": "$TEST_TRANSPORT", 00:29:54.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.846 "adrfam": "ipv4", 00:29:54.846 "trsvcid": "$NVMF_PORT", 00:29:54.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.846 "hdgst": ${hdgst:-false}, 00:29:54.846 "ddgst": ${ddgst:-false} 00:29:54.846 }, 00:29:54.846 "method": "bdev_nvme_attach_controller" 00:29:54.846 } 00:29:54.846 EOF 00:29:54.846 )") 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:29:54.846 20:43:47 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:54.846 "params": { 00:29:54.846 "name": "Nvme1", 00:29:54.846 "trtype": "tcp", 00:29:54.846 "traddr": "10.0.0.2", 00:29:54.846 "adrfam": "ipv4", 00:29:54.846 "trsvcid": "4420", 00:29:54.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.846 "hdgst": false, 00:29:54.846 "ddgst": false 00:29:54.846 }, 00:29:54.846 "method": "bdev_nvme_attach_controller" 00:29:54.846 }' 00:29:54.846 [2024-07-15 20:43:47.126023] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:29:54.846 [2024-07-15 20:43:47.126079] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1531240 ] 00:29:54.846 EAL: No free 2048 kB hugepages reported on node 1 00:29:54.846 [2024-07-15 20:43:47.192090] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.106 [2024-07-15 20:43:47.255202] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.367 Running I/O for 15 seconds... 00:29:57.927 20:43:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1530759 00:29:57.927 20:43:50 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:57.927 [2024-07-15 20:43:50.099922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:90968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:57.927 [2024-07-15 20:43:50.099970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.927 [2024-07-15 20:43:50.099994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:90976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.927 [2024-07-15 20:43:50.100004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:90984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:91000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:91008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:91016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:91024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:91032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:91040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:91048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:91056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:91064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:91072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:91080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:91088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:91096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:91104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 
[2024-07-15 20:43:50.100349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:91112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:91120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:91128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:91136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:91144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:91152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:91160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:91168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:91176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:91184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100569] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:91192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:91200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:91208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:91216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:91224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:91232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:91240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:91248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:91256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:91264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:113 nsid:1 lba:91272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:91280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:91288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.928 [2024-07-15 20:43:50.100848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:91296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.928 [2024-07-15 20:43:50.100858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.100872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:91304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.100886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.100898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:91312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.100911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.100926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:91320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.100937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.100950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:91328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.100961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.100976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:91336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.100986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:91344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:91352 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:91360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:91368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:91376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:91384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:91392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:91400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:91408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:91416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:91424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:91432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 
20:43:50.101195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:91440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:91448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:91456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:91464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:91472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:91480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:91488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:91496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:91504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:91512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:91520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:91528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:91536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:91544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:91552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:91560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:91568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:91576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:91584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:91592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:91600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.929 [2024-07-15 20:43:50.101560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:91608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.929 [2024-07-15 20:43:50.101567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:91616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:91624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:91632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:91640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:91648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:91656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:91664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:91672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:91680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:91688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:91696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:91704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:91712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:91720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:91728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:91736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:91744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:91752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101882] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:91760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:91768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:91776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:91792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:91800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.101984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:91808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.101991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:91816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:91824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:91832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102050] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:91840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:91848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:91856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:91864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:91872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:91880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:91888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:91896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:91904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:91912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:91920 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:91928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.930 [2024-07-15 20:43:50.102246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.930 [2024-07-15 20:43:50.102256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:91936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:91944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:91952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:91960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:91968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:91976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:57.931 [2024-07-15 20:43:50.102349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x26ce430 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.102367] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:57.931 [2024-07-15 20:43:50.102373] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:57.931 [2024-07-15 20:43:50.102380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:91984 len:8 PRP1 0x0 PRP2 0x0 00:29:57.931 [2024-07-15 20:43:50.102387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:57.931 [2024-07-15 20:43:50.102424] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x26ce430 was disconnected and 
freed. reset controller. 00:29:57.931 [2024-07-15 20:43:50.105921] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.105971] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.106618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.106637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.106645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.106868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.107088] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.107097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.107105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.110672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.931 [2024-07-15 20:43:50.120111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.120592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.120609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.120617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.120838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.121058] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.121067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.121075] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.124632] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.931 [2024-07-15 20:43:50.134075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.134639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.134656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.134663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.134883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.135103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.135112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.135119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.138681] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.931 [2024-07-15 20:43:50.147912] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.148551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.148590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.148600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.148842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.149066] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.149076] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.149084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.152872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.931 [2024-07-15 20:43:50.161913] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.162531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.162551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.162559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.162780] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.163005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.163014] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.163021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.166577] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.931 [2024-07-15 20:43:50.175806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.176502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.176540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.176550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.176791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.177015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.177024] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.177032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.180598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.931 [2024-07-15 20:43:50.189624] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.190290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.190328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.190341] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.190582] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.190806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.931 [2024-07-15 20:43:50.190816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.931 [2024-07-15 20:43:50.190824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.931 [2024-07-15 20:43:50.194390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.931 [2024-07-15 20:43:50.203615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.931 [2024-07-15 20:43:50.204320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.931 [2024-07-15 20:43:50.204358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.931 [2024-07-15 20:43:50.204371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.931 [2024-07-15 20:43:50.204614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.931 [2024-07-15 20:43:50.204838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.204848] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.204855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.208421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.932 [2024-07-15 20:43:50.217426] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.218091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.218129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.218140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.218390] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.218616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.218625] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.218632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.222185] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.932 [2024-07-15 20:43:50.231411] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.232095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.232133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.232144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.232394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.232618] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.232628] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.232636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.236190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.932 [2024-07-15 20:43:50.245412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.246101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.246138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.246149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.246400] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.246625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.246634] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.246642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.250197] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.932 [2024-07-15 20:43:50.259412] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.260061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.260098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.260113] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.260364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.260588] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.260598] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.260605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.264160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:57.932 [2024-07-15 20:43:50.273393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.274041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.274079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.274089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.274339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.274565] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.274574] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.274582] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.278137] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:57.932 [2024-07-15 20:43:50.287367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:57.932 [2024-07-15 20:43:50.288073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:57.932 [2024-07-15 20:43:50.288111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:57.932 [2024-07-15 20:43:50.288121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:57.932 [2024-07-15 20:43:50.288372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:57.932 [2024-07-15 20:43:50.288597] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:57.932 [2024-07-15 20:43:50.288607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:57.932 [2024-07-15 20:43:50.288615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:57.932 [2024-07-15 20:43:50.292172] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.195 [2024-07-15 20:43:50.301187] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.301764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.301782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.301790] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.302010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.302244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.302254] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.302261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.305814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.195 [2024-07-15 20:43:50.315032] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.315643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.315659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.315667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.315887] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.316106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.316115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.316122] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.319677] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.195 [2024-07-15 20:43:50.328888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.329440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.329457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.329464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.329684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.329904] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.329913] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.329920] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.333481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.195 [2024-07-15 20:43:50.342690] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.343338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.343377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.343387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.343627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.343851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.343861] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.343868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.347439] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.195 [2024-07-15 20:43:50.356667] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.357337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.357376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.357388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.357630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.357854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.357865] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.357873] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.361443] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.195 [2024-07-15 20:43:50.370673] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.195 [2024-07-15 20:43:50.371333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.195 [2024-07-15 20:43:50.371372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.195 [2024-07-15 20:43:50.371384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.195 [2024-07-15 20:43:50.371627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.195 [2024-07-15 20:43:50.371851] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.195 [2024-07-15 20:43:50.371860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.195 [2024-07-15 20:43:50.371868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.195 [2024-07-15 20:43:50.375436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.195 [2024-07-15 20:43:50.384666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.385277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.385303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.385311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.385537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.385758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.385767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.385774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.389333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.196 [2024-07-15 20:43:50.398545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.399270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.399309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.399324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.399564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.399788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.399797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.399805] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.403366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.196 [2024-07-15 20:43:50.412367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.413072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.413109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.413120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.413370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.413595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.413606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.413614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.417169] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.196 [2024-07-15 20:43:50.426176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.426884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.426922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.426933] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.427173] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.427407] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.427417] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.427425] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.430989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.196 [2024-07-15 20:43:50.439997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.440665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.440703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.440714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.440953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.441178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.441191] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.441199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.444763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.196 [2024-07-15 20:43:50.453981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.196 [2024-07-15 20:43:50.454693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.196 [2024-07-15 20:43:50.454731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.196 [2024-07-15 20:43:50.454742] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.196 [2024-07-15 20:43:50.454982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.196 [2024-07-15 20:43:50.455207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.196 [2024-07-15 20:43:50.455216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.196 [2024-07-15 20:43:50.455224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.196 [2024-07-15 20:43:50.458791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
[log trimmed: between 20:43:50.467 and 20:43:51.125 (Jenkins offsets 00:29:58.196 through 00:29:58.985) the identical nine-entry disconnect/reconnect cycle shown above repeats another 48 times, always against tqpair=0x249c540 at 10.0.0.2:4420 and always ending with bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. Only the timestamps differ.]
00:29:58.985 [2024-07-15 20:43:51.134317] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.134894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.134914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.134922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.135142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.135369] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.135379] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.135386] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.138937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.985 [2024-07-15 20:43:51.148149] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.148711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.148732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.148740] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.148960] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.149180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.149189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.149195] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.152940] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.985 [2024-07-15 20:43:51.161951] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.162614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.162652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.162663] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.162902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.163127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.163136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.163144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.166714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.985 [2024-07-15 20:43:51.175935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.176621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.176659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.176670] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.176910] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.177134] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.177143] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.177151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.180717] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.985 [2024-07-15 20:43:51.189737] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.190507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.190545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.190556] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.190796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.191024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.191034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.191042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.194606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.985 [2024-07-15 20:43:51.203617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.204327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.204365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.204376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.204615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.204840] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.204849] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.204857] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.985 [2024-07-15 20:43:51.208423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.985 [2024-07-15 20:43:51.217441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.985 [2024-07-15 20:43:51.218118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.985 [2024-07-15 20:43:51.218157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.985 [2024-07-15 20:43:51.218167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.985 [2024-07-15 20:43:51.218416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.985 [2024-07-15 20:43:51.218640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.985 [2024-07-15 20:43:51.218650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.985 [2024-07-15 20:43:51.218658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.222211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.986 [2024-07-15 20:43:51.231437] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.232047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.232066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.232074] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.232310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.232531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.232540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.232547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.236102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.986 [2024-07-15 20:43:51.245321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.245842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.245859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.245867] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.246086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.246312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.246322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.246329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.249946] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.986 [2024-07-15 20:43:51.259164] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.259724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.259741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.259748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.259968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.260187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.260196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.260203] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.263756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.986 [2024-07-15 20:43:51.273007] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.273716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.273755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.273766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.274008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.274241] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.274251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.274259] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.277815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.986 [2024-07-15 20:43:51.286830] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.287569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.287606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.287621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.287861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.288085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.288094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.288102] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.291666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.986 [2024-07-15 20:43:51.300678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.301166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.301186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.301194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.301419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.301641] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.301650] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.301657] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.305205] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.986 [2024-07-15 20:43:51.314632] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.315197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.315214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.315221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.315446] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.315666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.315675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.315682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.319235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.986 [2024-07-15 20:43:51.328451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.329081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.329119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.329130] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.329378] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.329603] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.329617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.329625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.333191] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:58.986 [2024-07-15 20:43:51.342417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.342990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.343009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.343017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.343243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.343463] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.343473] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.343480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.347030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:58.986 [2024-07-15 20:43:51.356254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:58.986 [2024-07-15 20:43:51.356936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.986 [2024-07-15 20:43:51.356974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:58.986 [2024-07-15 20:43:51.356985] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:58.986 [2024-07-15 20:43:51.357225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:58.986 [2024-07-15 20:43:51.357456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:58.986 [2024-07-15 20:43:51.357467] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:58.986 [2024-07-15 20:43:51.357475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:58.986 [2024-07-15 20:43:51.361035] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.248 [2024-07-15 20:43:51.370259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.248 [2024-07-15 20:43:51.370815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.248 [2024-07-15 20:43:51.370834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.248 [2024-07-15 20:43:51.370842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.248 [2024-07-15 20:43:51.371063] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.248 [2024-07-15 20:43:51.371289] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.248 [2024-07-15 20:43:51.371298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.248 [2024-07-15 20:43:51.371305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.248 [2024-07-15 20:43:51.374859] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.248 [2024-07-15 20:43:51.384092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.248 [2024-07-15 20:43:51.384832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.248 [2024-07-15 20:43:51.384870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.248 [2024-07-15 20:43:51.384881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.248 [2024-07-15 20:43:51.385121] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.248 [2024-07-15 20:43:51.385353] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.248 [2024-07-15 20:43:51.385363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.248 [2024-07-15 20:43:51.385371] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.248 [2024-07-15 20:43:51.388939] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.248 [2024-07-15 20:43:51.397952] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.248 [2024-07-15 20:43:51.398579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.248 [2024-07-15 20:43:51.398617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.248 [2024-07-15 20:43:51.398628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.248 [2024-07-15 20:43:51.398868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.248 [2024-07-15 20:43:51.399092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.248 [2024-07-15 20:43:51.399102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.248 [2024-07-15 20:43:51.399110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.248 [2024-07-15 20:43:51.402676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.248 [2024-07-15 20:43:51.411900] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.248 [2024-07-15 20:43:51.412578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.248 [2024-07-15 20:43:51.412616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.248 [2024-07-15 20:43:51.412627] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.248 [2024-07-15 20:43:51.412867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.248 [2024-07-15 20:43:51.413091] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.248 [2024-07-15 20:43:51.413100] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.248 [2024-07-15 20:43:51.413108] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.248 [2024-07-15 20:43:51.416673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.248 [2024-07-15 20:43:51.425905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.248 [2024-07-15 20:43:51.426571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.248 [2024-07-15 20:43:51.426610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.248 [2024-07-15 20:43:51.426621] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.248 [2024-07-15 20:43:51.426865] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.248 [2024-07-15 20:43:51.427089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.248 [2024-07-15 20:43:51.427098] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.248 [2024-07-15 20:43:51.427105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.248 [2024-07-15 20:43:51.430672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.248 [2024-07-15 20:43:51.439906] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.440580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.440619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.440629] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.440869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.441093] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.441103] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.441110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.444675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.249 [2024-07-15 20:43:51.453894] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.454541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.454580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.454591] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.454830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.455054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.455064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.455072] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.458636] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.249 [2024-07-15 20:43:51.467857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.468467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.468486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.468494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.468714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.468935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.468945] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.468961] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.472520] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.249 [2024-07-15 20:43:51.481740] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.482466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.482504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.482515] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.482754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.482979] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.482988] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.482996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.486560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.249 [2024-07-15 20:43:51.495583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.496192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.496212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.496220] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.496445] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.496666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.496675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.496682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.500233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.249 [2024-07-15 20:43:51.509451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.510037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.510054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.510062] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.510286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.510506] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.510516] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.510523] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.514072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.249 [2024-07-15 20:43:51.523294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.523896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.523935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.523945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.524185] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.524417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.524429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.524436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.527990] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.249 [2024-07-15 20:43:51.537228] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.537799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.537819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.537827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.538047] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.538274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.538284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.538291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.541841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.249 [2024-07-15 20:43:51.551061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.551767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.551805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.551816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.552056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.552288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.552299] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.552307] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.555862] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.249 [2024-07-15 20:43:51.564876] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.565389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.565426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.565438] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.565684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.565908] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.565917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.565925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.569488] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.249 [2024-07-15 20:43:51.578710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.249 [2024-07-15 20:43:51.579186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.249 [2024-07-15 20:43:51.579204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.249 [2024-07-15 20:43:51.579212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.249 [2024-07-15 20:43:51.579437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.249 [2024-07-15 20:43:51.579658] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.249 [2024-07-15 20:43:51.579668] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.249 [2024-07-15 20:43:51.579677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.249 [2024-07-15 20:43:51.583228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.250 [2024-07-15 20:43:51.592665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.250 [2024-07-15 20:43:51.593331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.250 [2024-07-15 20:43:51.593369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.250 [2024-07-15 20:43:51.593382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.250 [2024-07-15 20:43:51.593622] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.250 [2024-07-15 20:43:51.593846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.250 [2024-07-15 20:43:51.593856] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.250 [2024-07-15 20:43:51.593864] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.250 [2024-07-15 20:43:51.597423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.250 [2024-07-15 20:43:51.606641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.250 [2024-07-15 20:43:51.607135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.250 [2024-07-15 20:43:51.607154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.250 [2024-07-15 20:43:51.607162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.250 [2024-07-15 20:43:51.607387] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.250 [2024-07-15 20:43:51.607608] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.250 [2024-07-15 20:43:51.607617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.250 [2024-07-15 20:43:51.607629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.250 [2024-07-15 20:43:51.611182] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.250 [2024-07-15 20:43:51.620615] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.250 [2024-07-15 20:43:51.621342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.250 [2024-07-15 20:43:51.621381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.250 [2024-07-15 20:43:51.621393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.250 [2024-07-15 20:43:51.621635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.250 [2024-07-15 20:43:51.621860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.250 [2024-07-15 20:43:51.621869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.250 [2024-07-15 20:43:51.621877] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.250 [2024-07-15 20:43:51.625442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.512 [2024-07-15 20:43:51.634473] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.635182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.635220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.635241] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.635482] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.635707] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.635717] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.635725] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.639283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.512 [2024-07-15 20:43:51.648299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.649007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.649045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.649056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.649304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.649528] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.649538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.649546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.653103] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.512 [2024-07-15 20:43:51.662117] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.662822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.662864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.662875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.663115] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.663346] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.663356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.663364] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.666922] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.512 [2024-07-15 20:43:51.675934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.676592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.676630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.676640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.676880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.677104] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.677113] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.677121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.680687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.512 [2024-07-15 20:43:51.689914] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.690594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.690632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.690642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.690882] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.691106] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.691116] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.691124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.694688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:59.512 [2024-07-15 20:43:51.703742] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:29:59.512 [2024-07-15 20:43:51.704468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.512 [2024-07-15 20:43:51.704506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:29:59.512 [2024-07-15 20:43:51.704517] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:29:59.512 [2024-07-15 20:43:51.704757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:29:59.512 [2024-07-15 20:43:51.704985] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:29:59.512 [2024-07-15 20:43:51.704995] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:29:59.512 [2024-07-15 20:43:51.705003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:29:59.512 [2024-07-15 20:43:51.708569] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:59.512 [2024-07-15 20:43:51.717582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.512 [2024-07-15 20:43:51.718291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.512 [2024-07-15 20:43:51.718329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.512 [2024-07-15 20:43:51.718340] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.512 [2024-07-15 20:43:51.718580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.512 [2024-07-15 20:43:51.718804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.512 [2024-07-15 20:43:51.718813] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.512 [2024-07-15 20:43:51.718821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.512 [2024-07-15 20:43:51.722385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.512 [2024-07-15 20:43:51.731397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.512 [2024-07-15 20:43:51.732126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.732164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.732175] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.732432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.732657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.732667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.732675] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.736240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.745253] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.745845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.745864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.745872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.746092] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.746319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.746329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.746336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.749895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.759111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.759661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.759678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.759685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.759905] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.760124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.760133] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.760140] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.763693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.772909] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.773613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.773651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.773662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.773901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.774125] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.774135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.774143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.777706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.786714] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.787352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.787390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.787402] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.787642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.787875] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.787885] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.787894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.791456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.800679] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.801264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.801303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.801320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.801562] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.801786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.801796] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.801803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.805366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.814587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.815186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.815205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.815213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.815440] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.815661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.815669] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.815676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.819225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.828447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.829043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.829060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.829068] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.829293] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.829514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.829524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.829531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.833092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.842314] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.842893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.842931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.842942] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.843181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.843413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.843428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.843436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.846993] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.856216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.856901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.856939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.856950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.857189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.857422] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.857432] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.857440] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.860998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.870222] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.513 [2024-07-15 20:43:51.870679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.513 [2024-07-15 20:43:51.870701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.513 [2024-07-15 20:43:51.870710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.513 [2024-07-15 20:43:51.870931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.513 [2024-07-15 20:43:51.871152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.513 [2024-07-15 20:43:51.871160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.513 [2024-07-15 20:43:51.871167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.513 [2024-07-15 20:43:51.874725] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.513 [2024-07-15 20:43:51.884156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.514 [2024-07-15 20:43:51.884758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.514 [2024-07-15 20:43:51.884774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.514 [2024-07-15 20:43:51.884782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.514 [2024-07-15 20:43:51.885002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.514 [2024-07-15 20:43:51.885221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.514 [2024-07-15 20:43:51.885235] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.514 [2024-07-15 20:43:51.885243] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.514 [2024-07-15 20:43:51.888802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.898030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.898621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.898637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.898645] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.898864] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.899084] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.899094] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.899101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.902656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.911872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.912529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.912567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.912577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.912817] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.913041] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.913050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.913058] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.916620] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.925837] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.926414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.926433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.926441] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.926662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.926882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.926890] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.926898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.930452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.939677] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.940347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.940385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.940397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.940642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.940866] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.940876] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.940884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.944452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.953672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.954348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.954386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.954398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.954639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.954863] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.954873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.954880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.958447] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.967662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.968358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.968396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.968406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.968646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.968870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.968879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.968887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.972452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.981460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.982164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.982202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.982212] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.982460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.982686] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.982695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.982707] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.986266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:51.995287] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:51.995815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:51.995853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:51.995865] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:51.996107] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:51.996340] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:51.996350] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:51.996358] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:51.999914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:52.009132] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:52.009800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:52.009837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:52.009848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:52.010088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:52.010319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:52.010329] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:52.010337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:52.013893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:52.022962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:52.023640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:52.023678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:52.023689] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:52.023928] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:52.024152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:52.024161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:52.024169] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:52.027734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:52.036962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:52.037682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:52.037719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:52.037730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:52.037970] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:52.038194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:52.038203] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:52.038211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:52.041774] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:52.050785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:52.051242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:52.051264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:52.051272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:52.051493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:52.051713] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:52.051723] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:52.051730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.775 [2024-07-15 20:43:52.055289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.775 [2024-07-15 20:43:52.064713] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.775 [2024-07-15 20:43:52.065346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.775 [2024-07-15 20:43:52.065385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.775 [2024-07-15 20:43:52.065395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.775 [2024-07-15 20:43:52.065635] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.775 [2024-07-15 20:43:52.065859] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.775 [2024-07-15 20:43:52.065868] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.775 [2024-07-15 20:43:52.065876] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.069440] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.078658] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.079326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.079347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.079355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.079576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.079801] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.079811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.079818] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.083373] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.092592] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.093224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.093269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.093280] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.093519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.093743] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.093753] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.093761] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.097320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.106536] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.107270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.107308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.107320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.107560] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.107784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.107795] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.107803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.111366] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.120377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.120944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.120963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.120971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.121191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.121417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.121427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.121434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.124989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.134210] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.134877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.134915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.134926] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.135165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.135399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.135409] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.135417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.138969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:29:59.776 [2024-07-15 20:43:52.148186] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:29:59.776 [2024-07-15 20:43:52.148728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.776 [2024-07-15 20:43:52.148748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:29:59.776 [2024-07-15 20:43:52.148756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:29:59.776 [2024-07-15 20:43:52.148976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:29:59.776 [2024-07-15 20:43:52.149197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:29:59.776 [2024-07-15 20:43:52.149206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:29:59.776 [2024-07-15 20:43:52.149213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:29:59.776 [2024-07-15 20:43:52.152771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.162063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.162678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.162696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.162704] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.162924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.163144] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.163153] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.163160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.166715] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.175926] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.176477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.176503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.176512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.176732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.176952] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.176961] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.176967] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.180522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.189743] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.190292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.190308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.190316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.190535] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.190755] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.190764] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.190771] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.194323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.203741] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.204485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.204523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.204534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.204774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.204997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.205007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.205014] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.208579] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.217586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.218299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.218337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.218349] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.218591] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.218820] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.218830] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.218838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.222402] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.231408] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.232074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.232112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.232123] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.232379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.232605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.232614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.232622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.236177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.245396] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.246055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.246093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.246103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.246352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.246577] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.246587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.246594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.250149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.259380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.260087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.260125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.260136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.260384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.260609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.260619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.260627] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.264180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.273197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.273817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.273837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.273845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.274065] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.274290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.274300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.274306] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.277857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.287070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.287722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.287760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.287771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.288010] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.288242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.288253] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.288261] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.291828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.301051] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.301562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.301600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.301612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.301853] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.302077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.302087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.302094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.305659] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.314879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:30:00.036 [2024-07-15 20:43:52.315559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.036 [2024-07-15 20:43:52.315598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420
00:30:00.036 [2024-07-15 20:43:52.315613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set
00:30:00.036 [2024-07-15 20:43:52.315852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor
00:30:00.036 [2024-07-15 20:43:52.316076] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:30:00.036 [2024-07-15 20:43:52.316086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:30:00.036 [2024-07-15 20:43:52.316093] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:30:00.036 [2024-07-15 20:43:52.319657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:30:00.036 [2024-07-15 20:43:52.328878] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.329540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.329578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.329589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.329829] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.036 [2024-07-15 20:43:52.330053] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.036 [2024-07-15 20:43:52.330062] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.036 [2024-07-15 20:43:52.330070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.036 [2024-07-15 20:43:52.333646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.036 [2024-07-15 20:43:52.342866] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.343521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.343559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.343571] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.343811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.036 [2024-07-15 20:43:52.344034] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.036 [2024-07-15 20:43:52.344044] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.036 [2024-07-15 20:43:52.344052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.036 [2024-07-15 20:43:52.347619] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.036 [2024-07-15 20:43:52.356840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.357441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.357461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.357469] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.357689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.036 [2024-07-15 20:43:52.357909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.036 [2024-07-15 20:43:52.357922] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.036 [2024-07-15 20:43:52.357929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.036 [2024-07-15 20:43:52.361493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.036 [2024-07-15 20:43:52.370710] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.371336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.371375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.371387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.371630] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.036 [2024-07-15 20:43:52.371854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.036 [2024-07-15 20:43:52.371863] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.036 [2024-07-15 20:43:52.371871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.036 [2024-07-15 20:43:52.375432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.036 [2024-07-15 20:43:52.384650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.385330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.385368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.385380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.385621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.036 [2024-07-15 20:43:52.385845] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.036 [2024-07-15 20:43:52.385854] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.036 [2024-07-15 20:43:52.385862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.036 [2024-07-15 20:43:52.389426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.036 [2024-07-15 20:43:52.398648] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.036 [2024-07-15 20:43:52.399285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.036 [2024-07-15 20:43:52.399312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.036 [2024-07-15 20:43:52.399320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.036 [2024-07-15 20:43:52.399545] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.037 [2024-07-15 20:43:52.399767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.037 [2024-07-15 20:43:52.399776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.037 [2024-07-15 20:43:52.399784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.037 [2024-07-15 20:43:52.403345] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.037 [2024-07-15 20:43:52.412559] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.037 [2024-07-15 20:43:52.413239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.037 [2024-07-15 20:43:52.413277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.037 [2024-07-15 20:43:52.413288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.037 [2024-07-15 20:43:52.413527] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.037 [2024-07-15 20:43:52.413750] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.037 [2024-07-15 20:43:52.413760] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.037 [2024-07-15 20:43:52.413767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.417329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.426547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.427266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.427305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.427316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.427555] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.427779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.427789] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.427797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.431362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.440380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.441061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.441099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.441110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.441358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.441584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.441593] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.441600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.445154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.454373] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.455078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.455116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.455127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.455379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.455605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.455614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.455622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.459177] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.468194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.468863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.468901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.468911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.469151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.469385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.469396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.469404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.472964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.482192] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.482880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.482918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.482929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.483168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.483401] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.483412] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.483420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.486976] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.495997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.496656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.496695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.496705] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.496945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.497169] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.497179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.497191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.500757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.509976] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.510648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.510686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.510698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.510939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.511164] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.511173] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.511181] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.514744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.523958] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.524625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.524663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.524674] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.524914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.525138] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.525147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.525156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.528721] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.537946] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.538590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.538627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.538639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.538878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.539102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.539112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.539120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.542687] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.551908] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.552585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.552628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.552639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.552879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.553103] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.553112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.553120] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.556688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.565903] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.566539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.566577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.566589] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.566830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.567054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.567063] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.567071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.570642] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.297 [2024-07-15 20:43:52.579863] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.580569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.580607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.580618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.297 [2024-07-15 20:43:52.580858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.297 [2024-07-15 20:43:52.581082] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.297 [2024-07-15 20:43:52.581092] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.297 [2024-07-15 20:43:52.581099] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.297 [2024-07-15 20:43:52.584663] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.297 [2024-07-15 20:43:52.593694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.297 [2024-07-15 20:43:52.594281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.297 [2024-07-15 20:43:52.594307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.297 [2024-07-15 20:43:52.594316] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.594541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.594767] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.594776] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.594784] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.598346] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.298 [2024-07-15 20:43:52.607565] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.298 [2024-07-15 20:43:52.608248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.298 [2024-07-15 20:43:52.608287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.298 [2024-07-15 20:43:52.608297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.608537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.608761] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.608770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.608777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.612349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.298 [2024-07-15 20:43:52.621367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.298 [2024-07-15 20:43:52.622076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.298 [2024-07-15 20:43:52.622114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.298 [2024-07-15 20:43:52.622125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.622374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.622599] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.622608] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.622616] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.626173] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.298 [2024-07-15 20:43:52.635191] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.298 [2024-07-15 20:43:52.635862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.298 [2024-07-15 20:43:52.635900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.298 [2024-07-15 20:43:52.635911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.636151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.636385] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.636396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.636404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.639964] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.298 [2024-07-15 20:43:52.649175] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.298 [2024-07-15 20:43:52.649854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.298 [2024-07-15 20:43:52.649891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.298 [2024-07-15 20:43:52.649902] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.650142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.650376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.650386] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.650394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.653950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.298 [2024-07-15 20:43:52.663169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.298 [2024-07-15 20:43:52.663885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.298 [2024-07-15 20:43:52.663923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.298 [2024-07-15 20:43:52.663934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.298 [2024-07-15 20:43:52.664174] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.298 [2024-07-15 20:43:52.664409] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.298 [2024-07-15 20:43:52.664419] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.298 [2024-07-15 20:43:52.664427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.298 [2024-07-15 20:43:52.667983] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.677008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.677609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.677628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.677637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.677857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.678078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.678086] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.678094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.681656] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.690872] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.691560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.691599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.691614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.691854] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.692078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.692087] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.692095] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.695662] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.704669] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.705238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.705258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.705266] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.705486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.705705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.705715] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.705722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.709273] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.718480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.719065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.719081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.719089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.719317] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.719537] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.719547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.719555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.723141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.732368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.733046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.733084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.733095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.733346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.733571] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.733585] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.733592] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.737149] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.746366] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.747061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.747099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.747109] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.747360] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.747585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.747594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.747602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.751156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.760169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.760888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.760926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.760937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.761176] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.761411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.761422] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.761430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.764986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.773994] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.774704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.774742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.774752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.774992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.775217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.775226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.775247] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.778803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.787815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.788463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.788501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.788512] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.788752] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.788976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.788986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.788994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.792570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.801785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.802485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.802523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.802534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.802774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.802998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.803007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.803015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.806580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.815586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.816168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.816187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.816195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.816422] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.816643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.816653] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.816660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.820212] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.829423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.830017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.830033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.830044] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.830271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.830491] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.830501] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.830508] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.834066] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.559 [2024-07-15 20:43:52.843278] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.843752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.559 [2024-07-15 20:43:52.843768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.559 [2024-07-15 20:43:52.843775] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.559 [2024-07-15 20:43:52.843994] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.559 [2024-07-15 20:43:52.844214] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.559 [2024-07-15 20:43:52.844223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.559 [2024-07-15 20:43:52.844236] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.559 [2024-07-15 20:43:52.847834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.559 [2024-07-15 20:43:52.857258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.559 [2024-07-15 20:43:52.857845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.560 [2024-07-15 20:43:52.857861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.560 [2024-07-15 20:43:52.857868] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.560 [2024-07-15 20:43:52.858087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.560 [2024-07-15 20:43:52.858314] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.560 [2024-07-15 20:43:52.858324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.560 [2024-07-15 20:43:52.858331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.560 [2024-07-15 20:43:52.861878] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.560 [2024-07-15 20:43:52.871097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.560 [2024-07-15 20:43:52.871565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.560 [2024-07-15 20:43:52.871581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.560 [2024-07-15 20:43:52.871588] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.560 [2024-07-15 20:43:52.871807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.560 [2024-07-15 20:43:52.872027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.560 [2024-07-15 20:43:52.872043] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.560 [2024-07-15 20:43:52.872050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.560 [2024-07-15 20:43:52.875604] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.560 [2024-07-15 20:43:52.885027] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.560 [2024-07-15 20:43:52.885695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.560 [2024-07-15 20:43:52.885732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.560 [2024-07-15 20:43:52.885743] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.560 [2024-07-15 20:43:52.885982] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.560 [2024-07-15 20:43:52.886207] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.560 [2024-07-15 20:43:52.886216] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.560 [2024-07-15 20:43:52.886224] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.560 [2024-07-15 20:43:52.889792] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:00.560 [2024-07-15 20:43:52.899019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:00.560 [2024-07-15 20:43:52.899679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.560 [2024-07-15 20:43:52.899717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:00.560 [2024-07-15 20:43:52.899728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:00.560 [2024-07-15 20:43:52.899968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:00.560 [2024-07-15 20:43:52.900192] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:00.560 [2024-07-15 20:43:52.900202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:00.560 [2024-07-15 20:43:52.900212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:00.560 [2024-07-15 20:43:52.903778] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:00.822 [the same nine-record reset/reconnect cycle repeats with fresh timestamps, failing at 20:43:52.913, 52.927, 52.941, 52.955, 52.968, 52.982, 52.996, 53.010, 53.024, 53.038, 53.051 and 53.065; every attempt ends with "Resetting controller failed."]
00:30:00.822 [another cycle fails at 20:43:53.079-53.084]
00:30:00.823 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1530759 Killed "${NVMF_APP[@]}" "$@"
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.823 [another reconnect cycle begins at 20:43:53.093 and reaches the failed state at 20:43:53.094]
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1532311
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1532311
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1532311 ']'
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:00.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:00.823 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:00.823 [the 20:43:53.093 cycle ends with "Resetting controller failed." at 20:43:53.098; another cycle fails at 20:43:53.107-53.112]
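The xtrace above shows waitforlisten's setup: it polls for the freshly launched nvmf_tgt (pid 1532311) to come up on /var/tmp/spdk.sock, giving up after max_retries=100. A stripped-down sketch of that wait, assuming only the defaults visible in the trace; the real helper lives in autotest_common.sh and additionally verifies that the RPC socket answers:

    waitforsocket() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100
        while (( max_retries-- > 0 )); do
            # stop early if the target process already exited
            kill -0 "$pid" 2>/dev/null || return 1
            # the UNIX-domain RPC socket appearing is the readiness signal
            [[ -S "$rpc_addr" ]] && return 0
            sleep 0.1
        done
        return 1
    }
    waitforsocket 1532311 /var/tmp/spdk.sock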
00:30:00.823 [two more cycles fail at 20:43:53.121 and 20:43:53.135]
00:30:00.823 [another cycle fails at 20:43:53.149-53.154]
00:30:00.823 [2024-07-15 20:43:53.158186] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:30:00.823 [2024-07-15 20:43:53.158237] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:00.823 [another cycle fails at 20:43:53.163-53.167]
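The EAL parameter -c 0xE is the core mask passed through from nvmfappstart -m 0xE: bits 1, 2 and 3 are set, so the target runs on three cores, matching the "Total cores available: 3" notice and the three reactors started further down. A quick mask decode, assuming the plain bit-i-equals-core-i convention:

    mask=0xE
    for core in {0..63}; do
        (( (mask >> core) & 1 )) && echo "core ${core} enabled"
    done
    # 0xE = 0b1110, so cores 1, 2 and 3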
00:30:00.823 [two more cycles fail at 20:43:53.176 and 20:43:53.191]
00:30:00.823 EAL: No free 2048 kB hugepages reported on node 1
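The hugepage notice means the EAL found no free 2048 kB hugepages on NUMA node 1; the run continues, presumably because pages elsewhere satisfied the allocation. The per-node counts live in standard sysfs paths and can be inspected (or, as root, changed) like this; the page count below is purely illustrative:

    # free and reserved 2048 kB hugepages per NUMA node
    grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/*_hugepages
    # reserving pages on node 1 would look like (root required):
    # echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages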
00:30:01.085 [two more cycles fail at 20:43:53.204 and 20:43:53.218]
00:30:01.085 [another cycle fails at 20:43:53.232-53.237]
00:30:01.085 [2024-07-15 20:43:53.245658] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:01.085 [another cycle fails at 20:43:53.246-53.251]
00:30:01.085 [two more cycles fail at 20:43:53.260 and 20:43:53.274]
00:30:01.085 [another cycle fails at 20:43:53.287-53.292]
00:30:01.085 [2024-07-15 20:43:53.299123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:01.085 [2024-07-15 20:43:53.299146] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:01.085 [2024-07-15 20:43:53.299152] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:01.085 [2024-07-15 20:43:53.299158] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:01.085 [2024-07-15 20:43:53.299162] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:01.085 [2024-07-15 20:43:53.299367] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:30:01.085 [2024-07-15 20:43:53.299763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:30:01.085 [2024-07-15 20:43:53.299764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
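The app_setup_trace notices spell out how to capture the tracepoints enabled by -e 0xFFFF; both commands come straight from the log:

    # live snapshot of the nvmf app's tracepoints
    spdk_trace -s nvmf -i 0
    # or keep the shared-memory trace buffer for offline analysis/debug
    cp /dev/shm/nvmf_trace.0 /tmp/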
00:30:01.085 [three more cycles fail at 20:43:53.301, 53.315 and 53.329]
00:30:01.085 [the reconnect loop keeps failing in the same way at 20:43:53.343, 53.357, 53.371, 53.385, 53.399, 53.412, 53.426, 53.440, 53.454, 53.468, 53.482, 53.496, 53.510, 53.524, 53.537 and 53.551; every attempt ends with "Resetting controller failed."]
00:30:01.348 [2024-07-15 20:43:53.565693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.566341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.566377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.566388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.566628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.566852] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.566860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.566868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.570434] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.348 [2024-07-15 20:43:53.579655] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.580213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.580258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.580269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.580508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.580732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.580740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.580747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.584305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.348 [2024-07-15 20:43:53.593528] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.594248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.594285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.594297] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.594540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.594773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.594782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.594789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.598348] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.348 [2024-07-15 20:43:53.607362] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.608110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.608148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.608159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.608407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.608632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.608640] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.608648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.612203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.348 [2024-07-15 20:43:53.621213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.621640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.621658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.621666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.621886] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.622105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.622114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.622121] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.625678] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.348 [2024-07-15 20:43:53.635118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.635658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.635695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.635706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.635950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.636174] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.636182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.636190] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.639755] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.348 [2024-07-15 20:43:53.648977] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.649668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.649706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.649717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.649956] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.650180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.650188] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.650197] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.653758] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.348 [2024-07-15 20:43:53.662979] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.663557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.663577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.663585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.663806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.664026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.664034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.664041] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.667595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.348 [2024-07-15 20:43:53.676811] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.677486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.677523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.677534] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.677774] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.348 [2024-07-15 20:43:53.677997] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.348 [2024-07-15 20:43:53.678005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.348 [2024-07-15 20:43:53.678017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.348 [2024-07-15 20:43:53.681580] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.348 [2024-07-15 20:43:53.690799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.348 [2024-07-15 20:43:53.691275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.348 [2024-07-15 20:43:53.691301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.348 [2024-07-15 20:43:53.691309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.348 [2024-07-15 20:43:53.691534] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.349 [2024-07-15 20:43:53.691754] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.349 [2024-07-15 20:43:53.691762] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.349 [2024-07-15 20:43:53.691770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.349 [2024-07-15 20:43:53.695338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.349 [2024-07-15 20:43:53.704773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.349 [2024-07-15 20:43:53.705357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.349 [2024-07-15 20:43:53.705394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.349 [2024-07-15 20:43:53.705407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.349 [2024-07-15 20:43:53.705650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.349 [2024-07-15 20:43:53.705873] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.349 [2024-07-15 20:43:53.705881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.349 [2024-07-15 20:43:53.705889] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.349 [2024-07-15 20:43:53.709452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.349 [2024-07-15 20:43:53.718674] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.349 [2024-07-15 20:43:53.719331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.349 [2024-07-15 20:43:53.719368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.349 [2024-07-15 20:43:53.719380] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.349 [2024-07-15 20:43:53.719623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.349 [2024-07-15 20:43:53.719846] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.349 [2024-07-15 20:43:53.719855] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.349 [2024-07-15 20:43:53.719862] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.349 [2024-07-15 20:43:53.723423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.609 [2024-07-15 20:43:53.732643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.609 [2024-07-15 20:43:53.733223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.609 [2024-07-15 20:43:53.733251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.609 [2024-07-15 20:43:53.733259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.609 [2024-07-15 20:43:53.733480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.609 [2024-07-15 20:43:53.733699] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.609 [2024-07-15 20:43:53.733707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.609 [2024-07-15 20:43:53.733713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.609 [2024-07-15 20:43:53.737277] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.609 [2024-07-15 20:43:53.746492] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.609 [2024-07-15 20:43:53.747204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.609 [2024-07-15 20:43:53.747248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.609 [2024-07-15 20:43:53.747259] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.609 [2024-07-15 20:43:53.747498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.609 [2024-07-15 20:43:53.747721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.609 [2024-07-15 20:43:53.747730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.609 [2024-07-15 20:43:53.747738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.609 [2024-07-15 20:43:53.751298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.609 [2024-07-15 20:43:53.760315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.609 [2024-07-15 20:43:53.761041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.609 [2024-07-15 20:43:53.761077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.609 [2024-07-15 20:43:53.761088] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.609 [2024-07-15 20:43:53.761335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.609 [2024-07-15 20:43:53.761559] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.609 [2024-07-15 20:43:53.761568] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.609 [2024-07-15 20:43:53.761575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.609 [2024-07-15 20:43:53.765133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.609 [2024-07-15 20:43:53.774145] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.609 [2024-07-15 20:43:53.774858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.609 [2024-07-15 20:43:53.774896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.609 [2024-07-15 20:43:53.774907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.609 [2024-07-15 20:43:53.775146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.609 [2024-07-15 20:43:53.775382] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.609 [2024-07-15 20:43:53.775392] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.609 [2024-07-15 20:43:53.775399] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.778956] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.610 [2024-07-15 20:43:53.787968] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.788279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.788304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.788312] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.788538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.788759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.788767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.788774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.792333] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.801972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.802569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.802606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.802617] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.802857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.803080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.803088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.803096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.806658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.610 [2024-07-15 20:43:53.815884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.816578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.816615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.816626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.816866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.817089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.817097] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.817105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.820676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.829688] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.830282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.830308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.830317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.830541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.830762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.830770] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.830777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.834343] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.610 [2024-07-15 20:43:53.843561] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.844228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.844271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.844283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.844522] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.844746] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.844755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.844762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.848320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.857539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.858260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.858297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.858309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.858550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.858773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.858782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.858790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.862355] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.610 [2024-07-15 20:43:53.871367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.871930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.871967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.871982] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.872222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.872454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.872471] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.872479] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.876034] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.885256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.885854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.885872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.885880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.886100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.886325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.886335] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.886341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.889889] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:30:01.610 [2024-07-15 20:43:53.899111] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.899803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.899840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.899851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.900090] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.900323] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.900332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.900340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.903895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.912915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.913620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.913657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.913668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.913907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.914131] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.914145] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.914153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.917718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
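The pattern repeats with a fresh attempt roughly every 14 ms (compare consecutive "resetting controller" timestamps, e.g. 20:43:53.899111 to 20:43:53.912915) against the same tqpair object 0x249c540, which is expected while the target side is still being configured. The host side producing these retries is a bdev_nvme controller attached over TCP. A hedged sketch of the equivalent manual attach, with rpc.py run from an SPDK checkout; the command is standard SPDK, but this exact flag set is assumed rather than taken from the job:
# Assumed host-side attach that creates controller Nvme1 (namespace
# Nvme1n1 in the results below); bdev_nvme keeps retrying resets when
# the connection to the target fails.
scripts/rpc.py bdev_nvme_attach_controller -b Nvme1 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1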
00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.610 [2024-07-15 20:43:53.926732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.927510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.927548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.927560] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.927803] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.928027] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.928036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.928043] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.931607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.940628] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.941119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.941157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.941168] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.941419] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.941643] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.941652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.941660] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.945215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
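The lines prefixed "20:43:53 nvmf_tcp.nvmf_bdevperf -- file@line -- #" are bash xtrace output from the test scripts, interleaved with the application log. Here the "(( i == 0 ))" test in autotest_common.sh evaluates false and the helper returns 0: the retry gate that waits for the nvmf target application has concluded the target is up, so timing_exit start_nvmf_tgt runs and configuration proceeds while the host keeps failing its resets. The usual shape of such a gate, sketched as the idiom rather than the verbatim helper (check_nvmf_target_up is hypothetical):
for (( i = 10; i > 0; i-- )); do
    check_nvmf_target_up && break   # hypothetical readiness probe
    sleep 1
done
(( i == 0 )) && return 1            # retries exhausted, target never came up
return 0                            # the path taken in the trace above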
00:30:01.610 [2024-07-15 20:43:53.954439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.955069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.955088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.955095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.955322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.955542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.955556] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.955563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.959117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.610 [2024-07-15 20:43:53.966574] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:01.610 [2024-07-15 20:43:53.968336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.969012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.969049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.969059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.969306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.969530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.969538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.969546] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.610 20:43:53 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.610 [2024-07-15 20:43:53.973099] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.610 [2024-07-15 20:43:53.982321] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.610 [2024-07-15 20:43:53.982800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.610 [2024-07-15 20:43:53.982837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.610 [2024-07-15 20:43:53.982848] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.610 [2024-07-15 20:43:53.983087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.610 [2024-07-15 20:43:53.983317] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.610 [2024-07-15 20:43:53.983327] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.610 [2024-07-15 20:43:53.983335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.610 [2024-07-15 20:43:53.986886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.870 [2024-07-15 20:43:53.996322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.870 [2024-07-15 20:43:53.996922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.870 [2024-07-15 20:43:53.996939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.870 [2024-07-15 20:43:53.996952] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.870 [2024-07-15 20:43:53.997172] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.870 [2024-07-15 20:43:53.997397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.870 [2024-07-15 20:43:53.997405] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.870 [2024-07-15 20:43:53.997412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.870 [2024-07-15 20:43:54.001007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
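Target configuration now starts in earnest: bdevperf.sh@17 creates the TCP transport, confirmed by the "*** TCP Transport Init ***" notice, and bdevperf.sh@18 creates a RAM-backed bdev, 64 MB with a 512-byte block size, to serve as the namespace. The same two calls issued directly (rpc_cmd in the scripts is effectively a wrapper around rpc.py; the flags are exactly as traced above):
# Transport first, then the backing bdev; both commands appear verbatim
# in the xtrace lines above.
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0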
00:30:01.870 Malloc0 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.870 [2024-07-15 20:43:54.010225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.870 [2024-07-15 20:43:54.010819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.870 [2024-07-15 20:43:54.010835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.870 [2024-07-15 20:43:54.010842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.870 [2024-07-15 20:43:54.011062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.870 [2024-07-15 20:43:54.011286] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.870 [2024-07-15 20:43:54.011295] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.870 [2024-07-15 20:43:54.011302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:01.870 [2024-07-15 20:43:54.014852] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.870 [2024-07-15 20:43:54.024067] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.870 [2024-07-15 20:43:54.024757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.870 [2024-07-15 20:43:54.024795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x249c540 with addr=10.0.0.2, port=4420 00:30:01.870 [2024-07-15 20:43:54.024806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x249c540 is same with the state(5) to be set 00:30:01.870 [2024-07-15 20:43:54.025045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x249c540 (9): Bad file descriptor 00:30:01.870 [2024-07-15 20:43:54.025276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:01.870 [2024-07-15 20:43:54.025285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:01.870 [2024-07-15 20:43:54.025293] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
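The bare "Malloc0" at the start of this block is the stdout of the bdev_malloc_create call, echoing the name of the new bdev. The next two RPCs build the subsystem: nvmf_create_subsystem with -a (allow any host NQN to connect) and -s (the reported serial number), then nvmf_subsystem_add_ns to expose Malloc0 as a namespace of cnode1. As direct rpc.py calls:
# Create the subsystem, then hang the malloc bdev off it as a namespace.
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0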
00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:01.870 [2024-07-15 20:43:54.028849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:01.870 [2024-07-15 20:43:54.034108] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:01.870 [2024-07-15 20:43:54.038080] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:01.870 20:43:54 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1531240 00:30:01.870 [2024-07-15 20:43:54.074777] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:30:11.876 00:30:11.876 Latency(us) 00:30:11.876 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.876 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:11.876 Verification LBA range: start 0x0 length 0x4000 00:30:11.876 Nvme1n1 : 15.00 8244.48 32.20 9647.01 0.00 7127.66 791.89 21408.43 00:30:11.876 =================================================================================================================== 00:30:11.876 Total : 8244.48 32.20 9647.01 0.00 7127.66 791.89 21408.43 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:11.876 rmmod nvme_tcp 00:30:11.876 rmmod nvme_fabrics 00:30:11.876 rmmod nvme_keyring 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1532311 ']' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1532311 ']' 
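This block is the turning point: the listener RPC at bdevperf.sh@21 makes the target accept connections ("*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***"), the next reset attempt succeeds ("Resetting controller successful"), and bdevperf.sh@38 waits on PID 1531240, presumably the backgrounded bdevperf job. The equivalent rpc.py call for the step that unblocks the host:
# After this, connect() on 10.0.0.2:4420 stops returning ECONNREFUSED.
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
Reading the results table that follows (columns: runtime(s), IOPS, MiB/s, Fail/s, TO/s, then Average/min/max latency in us): over the 15.00 s verify run at queue depth 128 with 4096-byte I/O, Nvme1n1 sustained 8244.48 IOPS, i.e. 32.20 MiB/s, alongside 9647.01 failed I/Os per second and no timeouts, with latency averaging 7127.66 us (min 791.89, max 21408.43). The large Fail/s figure is consistent with the controller resets this test keeps injecting while I/O is in flight.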
00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1532311' 00:30:11.876 killing process with pid 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1532311 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:11.876 20:44:02 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:12.815 20:44:05 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:12.815 00:30:12.815 real 0m28.189s 00:30:12.815 user 1m3.010s 00:30:12.815 sys 0m7.420s 00:30:12.815 20:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:12.815 20:44:05 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:12.815 ************************************ 00:30:12.815 END TEST nvmf_bdevperf 00:30:12.815 ************************************ 00:30:12.815 20:44:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:12.815 20:44:05 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:12.815 20:44:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:12.815 20:44:05 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:12.815 20:44:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:12.815 ************************************ 00:30:12.815 START TEST nvmf_target_disconnect 00:30:12.815 ************************************ 00:30:12.815 20:44:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:13.111 * Looking for test storage... 
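Teardown then unwinds everything in reverse: sync, delete the subsystem, unload the kernel NVMe modules (the bare "rmmod nvme_tcp / nvme_fabrics / nvme_keyring" lines are modprobe's verbose output), kill and reap the nvmf target application (PID 1532311, running as reactor_1), remove the spdk network namespace, and flush the test address. The whole bdevperf test cost 28.189 s of wall-clock time. A sketch of the same steps as plain commands, with the names and PID taken from this log:
# Subsystem first, so no I/O is in flight when the target dies.
scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp          # the rmmod lines above are its verbose output
modprobe -v -r nvme-fabrics
kill 1532311 && wait 1532311     # stop the nvmf target application
ip -4 addr flush cvl_0_1         # drop the test IP from the second NIC port
With nvmf_bdevperf finished, the runner immediately starts the next suite, nvmf_target_disconnect, whose storage probe resolves on the next line.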
00:30:13.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:30:13.111 20:44:05 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
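
The e810/x722/mlx arrays being filled above are whitelists of PCI vendor:device IDs that count as usable test NICs; the matching loop that follows resolves each detected function to its kernel net device with a plain sysfs glob. A sketch of that lookup in isolation (the PCI address is the one discovered below):

  pci=0000:31:00.0
  for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
    [ -e "$netdir" ] && echo "${netdir##*/}"                    # prints the bound netdev, e.g. cvl_0_0
  done
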
00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:21.296 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:21.296 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.296 20:44:13 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:21.296 Found net devices under 0000:31:00.0: cvl_0_0 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:21.296 Found net devices under 0000:31:00.1: cvl_0_1 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:21.296 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:30:21.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:30:21.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.678 ms
00:30:21.297
00:30:21.297 --- 10.0.0.2 ping statistics ---
00:30:21.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.297 rtt min/avg/max/mdev = 0.678/0.678/0.678/0.000 ms
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:30:21.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:30:21.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms
00:30:21.297
00:30:21.297 --- 10.0.0.1 ping statistics ---
00:30:21.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:30:21.297 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:21.297 ************************************
00:30:21.297 START TEST nvmf_target_disconnect_tc1
00:30:21.297 ************************************
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0
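
Collected out of the xtrace above, the network layout the harness just built reads more clearly as a plain script: the target-side NIC cvl_0_0 is moved into its own namespace while the initiator-side NIC cvl_0_1 stays in the root namespace, so NVMe/TCP traffic crosses a real link between the two ports:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # admit NVMe/TCP on the initiator side
  ping -c 1 10.0.0.2                                                 # sanity checks in both directions
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
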
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]]
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:30:21.297 EAL: No free 2048 kB hugepages reported on node 1
00:30:21.297 [2024-07-15 20:44:13.527592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:21.297 [2024-07-15 20:44:13.527664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa534b0 with addr=10.0.0.2, port=4420
00:30:21.297 [2024-07-15 20:44:13.527702] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair
00:30:21.297 [2024-07-15 20:44:13.527719] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed
00:30:21.297 [2024-07-15 20:44:13.527726] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed
spdk_nvme_probe() failed for transport address '10.0.0.2'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred
Initializing NVMe Controllers
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:30:21.297
00:30:21.297 real 0m0.121s
00:30:21.297 user 0m0.051s
00:30:21.297 sys 0m0.069s
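
tc1 passes precisely because the probe failed: nothing is listening on 10.0.0.2:4420 yet, so connect() returns errno 111 (ECONNREFUSED on Linux) and spdk_nvme_probe() aborts. The NOT/es bookkeeping above just inverts the exit status; a stripped-down sketch of the idea (the real helper in autotest_common.sh additionally validates the executable, as the valid_exec_arg trace shows, and treats es > 128 signal deaths specially):

  NOT() {
    local es=0
    "$@" || es=$?
    (( es != 0 ))                 # succeed only if the wrapped command failed
  }
  NOT build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
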
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x
00:30:21.297 ************************************
00:30:21.297 END TEST nvmf_target_disconnect_tc1
00:30:21.297 ************************************
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:21.297 ************************************
00:30:21.297 START TEST nvmf_target_disconnect_tc2
00:30:21.297 ************************************
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1538977
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1538977
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1538977 ']'
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
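
waitforlisten blocks until the just-forked nvmf_tgt answers on its RPC socket (rpc_addr=/var/tmp/spdk.sock and max_retries=100 in the xtrace above). A minimal sketch of that polling loop, assuming scripts/rpc.py from the SPDK tree; the real helper is considerably more careful about timeouts and diagnostics:

  waitforlisten() {
    local pid=$1 sock=${2:-/var/tmp/spdk.sock} i
    echo "Waiting for process to start up and listen on UNIX domain socket $sock..."
    for ((i = 0; i < 100; i++)); do                   # mirrors max_retries=100
      kill -0 "$pid" || return 1                      # the target died before it could listen
      scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null && return 0
      sleep 0.5
    done
    return 1
  }
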
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:21.297 20:44:13 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:21.557 [2024-07-15 20:44:13.683458] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:30:21.557 [2024-07-15 20:44:13.683522] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:21.557 EAL: No free 2048 kB hugepages reported on node 1
00:30:21.557 [2024-07-15 20:44:13.779523] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:21.557 [2024-07-15 20:44:13.873713] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:21.557 [2024-07-15 20:44:13.873774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:21.557 [2024-07-15 20:44:13.873782] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:21.557 [2024-07-15 20:44:13.873789] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:21.557 [2024-07-15 20:44:13.873795] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:21.557 [2024-07-15 20:44:13.873965] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:21.557 [2024-07-15 20:44:13.874111] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:21.557 [2024-07-15 20:44:13.874291] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:21.557 [2024-07-15 20:44:13.874327] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:22.129 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:22.129 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:30:22.129 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:22.129 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:22.129 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.390 Malloc0
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.390 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.391 [2024-07-15 20:44:14.552351] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.391 [2024-07-15 20:44:14.592732] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1539109
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:30:22.391 20:44:14 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
EAL: No free 2048 kB hugepages reported on node 1
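
Flattening the rpc_cmd xtrace above into one place, the whole target-side setup for tc2 is six RPCs (rpc_cmd is the harness wrapper that drives the target over /var/tmp/spdk.sock): create a 64 MiB malloc bdev, create the TCP transport, create the subsystem, attach the bdev as a namespace, then expose both the subsystem and the discovery service on 10.0.0.2:4420:

  rpc_cmd bdev_malloc_create 64 512 -b Malloc0
  rpc_cmd nvmf_create_transport -t tcp -o
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
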
00:30:24.306 20:44:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1538977
00:30:24.306 20:44:16 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:30:24.306 Read completed with error (sct=0, sc=8)
00:30:24.306 starting I/O failed
00:30:24.306 Write completed with error (sct=0, sc=8)
00:30:24.306 starting I/O failed
[... the same two-line "completed with error (sct=0, sc=8)" / "starting I/O failed" pair repeats for every Read and Write still queued on the dead controller ...]
00:30:24.306 [2024-07-15 20:44:16.626427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:24.306 [2024-07-15 20:44:16.626852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.306 [2024-07-15 20:44:16.626871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.306 qpair failed and we were unable to recover it.
[... this three-line connect() failed / sock connection error / qpair failed sequence repeats continuously, only the timestamps advancing (20:44:16.627235 through 20:44:16.653918), while the target stays down ...]
00:30:24.308 [2024-07-15 20:44:16.654196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.308 [2024-07-15 20:44:16.654208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.308 qpair failed and we were unable to recover it.
00:30:24.308 [2024-07-15 20:44:16.654602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.654614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.654998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.655010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.655415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.655427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.655731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.655746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.656066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.656078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.656304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.656316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.656675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.308 [2024-07-15 20:44:16.656687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.308 qpair failed and we were unable to recover it. 00:30:24.308 [2024-07-15 20:44:16.657002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.657013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.657290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.657301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.657630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.657643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 
00:30:24.309 [2024-07-15 20:44:16.657947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.657959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.658314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.658326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.658680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.658692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.659053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.659065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.659355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.659367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.659692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.659704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.660022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.660034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.660465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.660477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.660710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.660721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.660922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.660934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 
00:30:24.309 [2024-07-15 20:44:16.661286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.661298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.661675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.661691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.662047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.662063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.662418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.662434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.662765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.662781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.663104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.663121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.663456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.663472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.663677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.663693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.664094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.664111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.664353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.664370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 
00:30:24.309 [2024-07-15 20:44:16.664586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.664603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.664952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.664968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.665318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.665338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.665653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.665669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.666048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.666064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.666293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.666308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.666658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.666674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.667040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.667055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.667384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.667401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.667721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.667737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 
00:30:24.309 [2024-07-15 20:44:16.668113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.668128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.309 [2024-07-15 20:44:16.668483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.309 [2024-07-15 20:44:16.668500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.309 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.668712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.668728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.669118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.669137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.669418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.669434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.669793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.669809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.670196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.670212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.670428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.670444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.670761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.670777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.671134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.671149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 
00:30:24.310 [2024-07-15 20:44:16.671483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.671500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.671751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.671767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.672105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.672125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.672554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.672575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.672929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.672949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.673223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.673249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.673400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.673422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.673680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.673703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.674073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.674094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.674530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.674551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 
00:30:24.310 [2024-07-15 20:44:16.674978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.674998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.675332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.675353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.675711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.675731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.676085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.676105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.676482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.676502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.676900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.676920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.677163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.677185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.677521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.677542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.677908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.677929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.678188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.678207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 
00:30:24.310 [2024-07-15 20:44:16.678583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.678604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.678952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.678972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.679292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.679313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.679739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.679758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.680056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.680076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.680456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.680477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.680700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.310 [2024-07-15 20:44:16.680719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.310 qpair failed and we were unable to recover it. 00:30:24.310 [2024-07-15 20:44:16.681001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.311 [2024-07-15 20:44:16.681020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.311 qpair failed and we were unable to recover it. 00:30:24.311 [2024-07-15 20:44:16.681299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.311 [2024-07-15 20:44:16.681319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.311 qpair failed and we were unable to recover it. 00:30:24.311 [2024-07-15 20:44:16.681650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.311 [2024-07-15 20:44:16.681669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.311 qpair failed and we were unable to recover it. 
00:30:24.311 [2024-07-15 20:44:16.681912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.311 [2024-07-15 20:44:16.681934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.311 qpair failed and we were unable to recover it. 00:30:24.311 [2024-07-15 20:44:16.682330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.311 [2024-07-15 20:44:16.682352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.311 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.682736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.682758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.683008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.683033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.683412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.683434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.683830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.683850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.684177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.684197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.684615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.684635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.684870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.684890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.685156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.685176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 
00:30:24.582 [2024-07-15 20:44:16.685576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.685596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.685960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.685980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.686304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.686324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.686666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.686686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.687146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.687166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.687533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.687554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.687982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.688009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.688286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.688314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.688692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.688719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.688995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.689022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 
00:30:24.582 [2024-07-15 20:44:16.689410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.689439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.689837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.689864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.690297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.690325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.690723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.690750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.582 [2024-07-15 20:44:16.691194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.582 [2024-07-15 20:44:16.691222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.582 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.691687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.691715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.692102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.692129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.692521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.692550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.692951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.692978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.693382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.693409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 
00:30:24.583 [2024-07-15 20:44:16.693781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.693816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.694191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.694219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.694614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.694641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.695017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.695045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.695464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.695491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.695762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.695798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.696151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.696178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.696581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.696610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.697024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.697052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.697328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.697357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 
00:30:24.583 [2024-07-15 20:44:16.697757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.697784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.698155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.698183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.698595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.698623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.698965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.698998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.699252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.699281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.699660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.699687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.700081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.700108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.700368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.700396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.700783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.700810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 00:30:24.583 [2024-07-15 20:44:16.701219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.583 [2024-07-15 20:44:16.701257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.583 qpair failed and we were unable to recover it. 
00:30:24.583 [2024-07-15 20:44:16.701428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.583 [2024-07-15 20:44:16.701458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.583 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111 / nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 2024-07-15 20:44:16.701 through 20:44:16.782 ...]
00:30:24.589 [2024-07-15 20:44:16.782787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.589 [2024-07-15 20:44:16.782814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.589 qpair failed and we were unable to recover it.
00:30:24.589 [2024-07-15 20:44:16.783177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.783204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.783650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.783679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.784056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.784083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.784461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.784490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.784865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.784893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.785365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.785393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.785721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.785748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.786077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.786105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.786533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.786561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.786832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.786859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 
00:30:24.589 [2024-07-15 20:44:16.787271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.787304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.787626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.787661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.787936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.787963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.788325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.788354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.788731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.788759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.789144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.789172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.789554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.789582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.789974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.790002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.790382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.790411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.790762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.790789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 
00:30:24.589 [2024-07-15 20:44:16.791158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.791185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.791636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.791664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.792044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.792071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.792468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.792496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.792844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.792871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.793259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.793288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.793572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.793602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.793879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.793909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.794263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.794292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.794689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.794718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 
00:30:24.589 [2024-07-15 20:44:16.795089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.795117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.795495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.795524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.795964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.795991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.589 qpair failed and we were unable to recover it. 00:30:24.589 [2024-07-15 20:44:16.796421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.589 [2024-07-15 20:44:16.796450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.796814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.796842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.797246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.797275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.797556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.797586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.797849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.797880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.798255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.798285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.798668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.798695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 
00:30:24.590 [2024-07-15 20:44:16.799024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.799051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.799489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.799518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.799799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.799826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.800190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.800217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.800582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.800610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.801003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.801031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.801298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.801328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.801733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.801761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.802150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.802178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.802562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.802591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 
00:30:24.590 [2024-07-15 20:44:16.802984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.803017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.803392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.803420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.803792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.803819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.804214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.804250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.804636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.804664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.805069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.805097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.805249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.805279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.805663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.805691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.806066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.806094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.806458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.806486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 
00:30:24.590 [2024-07-15 20:44:16.806839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.806867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.807277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.807306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.807695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.807722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.808108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.808135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.808499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.808528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.808909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.808937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.809393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.809422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.809863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.809890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.810145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.810172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.810566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.810595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 
00:30:24.590 [2024-07-15 20:44:16.810949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.810976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.811359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.811387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.811692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.811719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.812087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.812114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.812520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.812548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.812920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.812947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.813327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.813356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.813746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.813773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.814137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.814164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.590 qpair failed and we were unable to recover it. 00:30:24.590 [2024-07-15 20:44:16.814546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.590 [2024-07-15 20:44:16.814575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 
00:30:24.591 [2024-07-15 20:44:16.814911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.814937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.815313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.815341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.815698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.815725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.816120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.816148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.816422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.816449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.816838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.816867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.817250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.817280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.817676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.817705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.818077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.818105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.818362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.818390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 
00:30:24.591 [2024-07-15 20:44:16.818741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.818775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.819095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.819123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.821307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.821365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.821745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.821775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.823380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.823430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.823835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.823867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.825466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.825513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.825917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.825949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.827484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.827531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.827919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.827951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 
00:30:24.591 [2024-07-15 20:44:16.830030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.830083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.830528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.830559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.831917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.831962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.832349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.832378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.591 qpair failed and we were unable to recover it. 00:30:24.591 [2024-07-15 20:44:16.833709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.591 [2024-07-15 20:44:16.833749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.834146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.834174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.834527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.834556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.834940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.834965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.835317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.835343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.835715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.835742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 
00:30:24.592 [2024-07-15 20:44:16.836105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.836131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.837894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.837941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.838331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.838361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.839809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.839852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.840251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.840280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.840691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.840717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.841063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.841086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.841450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.841474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.841853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.841877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.843314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.843356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 
00:30:24.592 [2024-07-15 20:44:16.843740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.843767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.844181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.844204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.844466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.844492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.844875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.844899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.845283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.845307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.845745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.845768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.846006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.846031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.846415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.846440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.846822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.846845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 00:30:24.592 [2024-07-15 20:44:16.847221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.592 [2024-07-15 20:44:16.847287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.592 qpair failed and we were unable to recover it. 
00:30:24.592 [2024-07-15 20:44:16.847656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.592 [2024-07-15 20:44:16.847686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.592 qpair failed and we were unable to recover it.
00:30:24.597 [... the same three-line failure sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeats roughly 200 more times between 20:44:16.848 and 20:44:16.930 ...]
00:30:24.597 [2024-07-15 20:44:16.930824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.930852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.931243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.931272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.931652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.931681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.931921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.931949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.932336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.932366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.932722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.932750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.933008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.933036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.933280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.933310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.933708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.933736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.934072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.934102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 
00:30:24.597 [2024-07-15 20:44:16.934465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.934495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.934854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.934882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.935276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.935306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.935696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.935725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.936072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.936100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.936546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.936575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.936945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.936973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.937323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.937357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.937713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.937742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.938099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.938128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 
00:30:24.597 [2024-07-15 20:44:16.938488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.938517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.938866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.938895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.939253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.939282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.939641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.939670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.940054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.940082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.940458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.940486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.940745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.940773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.941147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.941176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.941581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.941610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.941996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.942024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 
00:30:24.597 [2024-07-15 20:44:16.942399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.942429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.942792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.942821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.943202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.943240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.943619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.943648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.944013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.944042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.944405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.944435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.944691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.944723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.944986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.945015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.945271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.945300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.945697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.945726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 
00:30:24.597 [2024-07-15 20:44:16.945977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.946007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.946375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.597 [2024-07-15 20:44:16.946405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.597 qpair failed and we were unable to recover it. 00:30:24.597 [2024-07-15 20:44:16.946750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.946779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.947159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.947188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.947602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.947631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.947946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.947974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.948190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.948222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.948612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.948641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.949000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.949029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.949399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.949428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 
00:30:24.598 [2024-07-15 20:44:16.949773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.949800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.598 [2024-07-15 20:44:16.950165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.598 [2024-07-15 20:44:16.950193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.598 qpair failed and we were unable to recover it. 00:30:24.869 [2024-07-15 20:44:16.950604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.869 [2024-07-15 20:44:16.950634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.869 qpair failed and we were unable to recover it. 00:30:24.869 [2024-07-15 20:44:16.950992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.869 [2024-07-15 20:44:16.951021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.869 qpair failed and we were unable to recover it. 00:30:24.869 [2024-07-15 20:44:16.951392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.869 [2024-07-15 20:44:16.951422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.869 qpair failed and we were unable to recover it. 00:30:24.869 [2024-07-15 20:44:16.951735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.951764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.952015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.952047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.952329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.952365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.952740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.952770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.953209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.953248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 
00:30:24.870 [2024-07-15 20:44:16.953633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.953661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.954049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.954077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.954326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.954354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.954735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.954763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.955145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.955173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.955560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.955590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.955959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.955988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.956272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.956303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.956677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.956706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.957087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.957115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 
00:30:24.870 [2024-07-15 20:44:16.957497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.957527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.957896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.957925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.958315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.958345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.958606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.958640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.959007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.959037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.959417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.959448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.959788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.959817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.960079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.960110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.960492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.960522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.960883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.960913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 
00:30:24.870 [2024-07-15 20:44:16.961257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.961286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.961709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.961737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.962075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.962103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.962359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.962391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.962773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.962803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.963191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.963220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.963616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.963645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.964017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.964046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.964421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.964450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.964834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.964863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 
00:30:24.870 [2024-07-15 20:44:16.965243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.965273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.965666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.965695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.966080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.966109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.966554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.966583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.966955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.966983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.870 [2024-07-15 20:44:16.967358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.870 [2024-07-15 20:44:16.967388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.870 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.967787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.967815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.968182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.968216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.968664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.968694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.969066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.969097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 
00:30:24.871 [2024-07-15 20:44:16.969450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.969481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.969875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.969904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.970279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.970309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.970723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.970752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.971151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.971180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.971584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.971614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.971996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.972025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.972286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.972314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.972723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.972752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.973148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.973176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 
00:30:24.871 [2024-07-15 20:44:16.973543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.973573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.973926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.973956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.974336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.974366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.974760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.974790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.975162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.975189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.975548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.975578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.975962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.975991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.976262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.976293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.976715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.976744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.977115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.977143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 
00:30:24.871 [2024-07-15 20:44:16.977428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.977456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.977719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.977750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.978046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.978075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.978439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.978469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.978848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.978877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.979261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.979293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.979565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.979594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.979970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.979999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.980383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.980413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 00:30:24.871 [2024-07-15 20:44:16.980820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.871 [2024-07-15 20:44:16.980849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.871 qpair failed and we were unable to recover it. 
00:30:24.871 [2024-07-15 20:44:16.981221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.871 [2024-07-15 20:44:16.981258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.871 qpair failed and we were unable to recover it.
00:30:24.871 [the same three-record failure repeats ~210 times between 2024-07-15 20:44:16.981 and 20:44:17.065 (elapsed 00:30:24.871 through 00:30:24.877), every occurrence with errno = 111, tqpair=0x7f5af0000b90, addr=10.0.0.2, port=4420]
00:30:24.877 [2024-07-15 20:44:17.065215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.877 [2024-07-15 20:44:17.065254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.877 qpair failed and we were unable to recover it.
00:30:24.877 [2024-07-15 20:44:17.065670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.065699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.066086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.066116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.066506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.066537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.066911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.066941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.067338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.067369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.067625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.067656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.068033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.068063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.068442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.068472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.068859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.068889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.069282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.069311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 
00:30:24.877 [2024-07-15 20:44:17.069699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.069728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.070000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.070029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.070414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.070445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.070840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.070871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.071263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.071294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.071667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.071696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.072094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.072124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.072492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.072522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.072896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.072925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.073305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.073336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 
00:30:24.877 [2024-07-15 20:44:17.073719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.073748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.074012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.074043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.074426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.074457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.074832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.074860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.075195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.075227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.075655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.075685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.076136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.076166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.076547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.076578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.076979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.077008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.077393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.077423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 
00:30:24.877 [2024-07-15 20:44:17.077801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.077831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.078209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.078248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.078642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.078671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.079064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.877 [2024-07-15 20:44:17.079094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.877 qpair failed and we were unable to recover it. 00:30:24.877 [2024-07-15 20:44:17.079489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.079520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.079902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.079931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.080344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.080376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.080775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.080811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.081176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.081206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.081541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.081574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 
00:30:24.878 [2024-07-15 20:44:17.081979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.082008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.082394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.082425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.082804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.082833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.083283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.083315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.083704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.083734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.084015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.084046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.084425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.084456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.084821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.084851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.085212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.085251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.085684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.085715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 
00:30:24.878 [2024-07-15 20:44:17.086090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.086120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.086518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.086548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.086933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.086965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.087354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.087384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.087766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.087796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.088171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.088200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.088587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.088618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.089009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.089039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.089416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.089446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.089826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.089855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 
00:30:24.878 [2024-07-15 20:44:17.090252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.090283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.090693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.090722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.091106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.091137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.091517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.091548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.091937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.091969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.092358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.092388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.092764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.092794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.093168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.093198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.093593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.093624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 00:30:24.878 [2024-07-15 20:44:17.094012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.878 [2024-07-15 20:44:17.094042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.878 qpair failed and we were unable to recover it. 
00:30:24.878 [2024-07-15 20:44:17.094420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.094450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.094826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.094857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.095265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.095311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.095738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.095767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.096149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.096180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.096543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.096574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.096964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.096993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.097381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.097418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.097761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.097792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.098161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.098191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 
00:30:24.879 [2024-07-15 20:44:17.098581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.098611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.099004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.099035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.099314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.099349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.099717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.099747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.100153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.100183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.100572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.100603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.100976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.101007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.101271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.101304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.101701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.101733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.102068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.102098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 
00:30:24.879 [2024-07-15 20:44:17.102535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.102565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.102947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.102977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.103239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.103271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.103733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.103763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.104137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.104166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.104553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.104583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.104985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.105015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.105404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.105436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.105806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.105836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.106208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.106248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 
00:30:24.879 [2024-07-15 20:44:17.106630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.106662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.107063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.107093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.107492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.107523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.107897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.107926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.108333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.108365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.108782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.108811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.109212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.109250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.109627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.109657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.110053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.110083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.110492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.110522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 
00:30:24.879 [2024-07-15 20:44:17.110865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.879 [2024-07-15 20:44:17.110896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.879 qpair failed and we were unable to recover it. 00:30:24.879 [2024-07-15 20:44:17.111154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.111185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.111589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.111619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.112010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.112039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.112396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.112428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.112801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.112831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.113119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.113150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.113431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.113468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.113842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.113873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.114251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.114283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 
00:30:24.880 [2024-07-15 20:44:17.114587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.114617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.114975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.115005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.115386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.115418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.115685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.115715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.115997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.116028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.116425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.116456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.116717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.116746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.117200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.117237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.117617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.117648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 00:30:24.880 [2024-07-15 20:44:17.117895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.880 [2024-07-15 20:44:17.117926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.880 qpair failed and we were unable to recover it. 
00:30:24.880 [2024-07-15 20:44:17.118193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.880 [2024-07-15 20:44:17.118223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.880 qpair failed and we were unable to recover it.
[... the identical connect()/qpair-recovery error triplet repeats for every reconnect attempt from 20:44:17.118 through 20:44:17.202, all against tqpair=0x7f5af0000b90, addr=10.0.0.2, port=4420 ...]
00:30:24.885 [2024-07-15 20:44:17.202556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.885 [2024-07-15 20:44:17.202588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:24.885 qpair failed and we were unable to recover it.
00:30:24.885 [2024-07-15 20:44:17.202985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.203016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.203462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.203493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.203877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.203907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.204292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.204326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.204723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.204760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.205165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.205196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.205616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.205648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.206031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.206060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.206464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.206495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.206857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.206888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 
00:30:24.885 [2024-07-15 20:44:17.207283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.885 [2024-07-15 20:44:17.207313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.885 qpair failed and we were unable to recover it. 00:30:24.885 [2024-07-15 20:44:17.207712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.207742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.208132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.208162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.208528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.208561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.208953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.208984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.209367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.209398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.209795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.209826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.210185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.210215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.210644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.210675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.211067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.211097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 
00:30:24.886 [2024-07-15 20:44:17.211496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.211527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.211929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.211960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.212390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.212421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.212803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.212833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.213245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.213276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.213720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.213750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.214127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.214157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.214545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.214577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.214975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.215006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.215404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.215435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 
00:30:24.886 [2024-07-15 20:44:17.215813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.215843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.216228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.216280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.216693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.216723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.217118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.217148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.217411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.217441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.217834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.217865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.218248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.218278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.218683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.218713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.219101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.219132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.219529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.219562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 
00:30:24.886 [2024-07-15 20:44:17.219957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.219988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.220393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.220424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.220815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.220846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.221245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.221277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.221694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.221729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.222167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.222197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.886 [2024-07-15 20:44:17.222569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.886 [2024-07-15 20:44:17.222601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.886 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.222988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.223020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.223390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.223420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.223853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.223882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 
00:30:24.887 [2024-07-15 20:44:17.224279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.224310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.224705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.224735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.225092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.225122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.225517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.225550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.225930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.225959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.226391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.226421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.226818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.226850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.227272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.227303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.227708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.227739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.228124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.228154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 
00:30:24.887 [2024-07-15 20:44:17.228557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.228589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.228992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.229022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.229293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.229323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.229692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.229721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.230127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.230158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.230524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.230555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.230939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.230969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.231346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.231380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.231662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.231696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.232099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.232131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 
00:30:24.887 [2024-07-15 20:44:17.232411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.232443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.232863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.232895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.233285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.233316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.233717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.233747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.234125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.234156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.234546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.234576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.234981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.235013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.235383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.235414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.235797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.235826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.236206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.236253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 
00:30:24.887 [2024-07-15 20:44:17.236640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.236671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.237071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.237101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.237504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.237535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:24.887 [2024-07-15 20:44:17.237916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.887 [2024-07-15 20:44:17.237946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:24.887 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.238360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.238402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.238682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.238717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.238991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.239027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.239447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.239481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.239879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.239910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.240311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.240344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 
00:30:25.159 [2024-07-15 20:44:17.240784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.240818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.241207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.241251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.241629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.241660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.159 qpair failed and we were unable to recover it. 00:30:25.159 [2024-07-15 20:44:17.242058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.159 [2024-07-15 20:44:17.242088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.242452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.242483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.242880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.242911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.243188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.243220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.243625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.243656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.243921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.243955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.244342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.244373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 
00:30:25.160 [2024-07-15 20:44:17.244797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.244828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.245264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.245298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.245686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.245718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.246099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.246129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.246411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.246443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.246845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.246874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.247263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.247293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.247688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.247719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.247932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.247965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.248392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.248423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 
00:30:25.160 [2024-07-15 20:44:17.248687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.248718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.249148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.249180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.249577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.249609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.249972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.250003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.250414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.250445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.250829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.250865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.251255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.251286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.251674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.251703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.252089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.252120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.252510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.252541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 
00:30:25.160 [2024-07-15 20:44:17.252939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.252969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.253337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.253370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.253757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.253788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.254178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.254208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.254569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.254614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.255039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.255070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.255280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.255310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.255718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.255749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.256158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.256188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 00:30:25.160 [2024-07-15 20:44:17.256597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.160 [2024-07-15 20:44:17.256628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.160 qpair failed and we were unable to recover it. 
00:30:25.160 [2024-07-15 20:44:17.257028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.160 [2024-07-15 20:44:17.257062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.160 qpair failed and we were unable to recover it.
[... roughly 200 further entries from 20:44:17.257 through 20:44:17.344 omitted; each repeats the same triplet — connect() failed, errno = 111; sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — with only the timestamp varying ...]
00:30:25.166 [2024-07-15 20:44:17.344583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.166 [2024-07-15 20:44:17.344612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.166 qpair failed and we were unable to recover it.
00:30:25.166 [2024-07-15 20:44:17.345006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.345035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.345439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.345471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.345856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.345886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.346272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.346305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.346759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.346790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.347189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.347220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.347627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.347656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.348042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.348071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.348444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.348475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.348803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.348832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 
00:30:25.166 [2024-07-15 20:44:17.349271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.349302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.349695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.349732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.350137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.350167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.350535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.350565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.350952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.350982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.351367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.351398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.351814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.351846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.352265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.352297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.352562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.352592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.352977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.353006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 
00:30:25.166 [2024-07-15 20:44:17.353387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.353419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.353707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.353739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.354122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.354152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.354545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.354576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.354973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.355003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.355390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.355422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.355819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.355850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.356134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.356167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.356539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.356570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.166 [2024-07-15 20:44:17.356967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.356997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 
00:30:25.166 [2024-07-15 20:44:17.357391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.166 [2024-07-15 20:44:17.357422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.166 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.357805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.357838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.358260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.358291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.358733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.358762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.359156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.359186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.359592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.359622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.360017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.360047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.360327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.360361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.360787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.360818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.361207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.361247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 
00:30:25.167 [2024-07-15 20:44:17.361648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.361678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.362111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.362141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.362532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.362564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.362951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.362982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.363380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.363410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.363828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.363859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.364253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.364284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.364769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.364800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.365205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.365244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.365683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.365714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 
00:30:25.167 [2024-07-15 20:44:17.366101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.366132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.366526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.366562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.366955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.366985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.367353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.367387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.367785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.367815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.368146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.368177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.368587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.368619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.369018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.369048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.369441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.369472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.369911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.369942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 
00:30:25.167 [2024-07-15 20:44:17.370344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.370375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.370797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.370826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.371214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.371256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.371649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.371680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.372080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.372112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.372500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.372533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.372956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.372986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.373276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.373305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.373716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.373747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.374148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.374177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 
00:30:25.167 [2024-07-15 20:44:17.374547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.374578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.167 [2024-07-15 20:44:17.375741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.167 [2024-07-15 20:44:17.375789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.167 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.376207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.376254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.376653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.376686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.377059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.377092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.377498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.377532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.377932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.377964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.378373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.378404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.378797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.378828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.379170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.379199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 
00:30:25.168 [2024-07-15 20:44:17.379649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.379680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.380086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.380115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.380537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.380567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.380992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.381022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.381422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.381456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.381829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.381860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.382251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.382282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.382694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.382725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.383005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.383034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.383440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.383471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 
00:30:25.168 [2024-07-15 20:44:17.383853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.383884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.384159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.384199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.384596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.384628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.385035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.385066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.385451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.385483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.385792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.385821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.386183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.386215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.386660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.386690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.386975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.387008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.387420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.387451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 
00:30:25.168 [2024-07-15 20:44:17.387784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.387816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.388221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.388260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.388670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.388699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.389089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.389120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.389492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.389524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.389926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.168 [2024-07-15 20:44:17.389956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.168 qpair failed and we were unable to recover it. 00:30:25.168 [2024-07-15 20:44:17.390269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.390299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.390679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.390709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.391109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.391139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.391511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.391542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 
00:30:25.169 [2024-07-15 20:44:17.391934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.391963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.392336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.392368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.392799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.392827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.393129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.393158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.393431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.393467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.393837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.393867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.394223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.394264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.394542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.394570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.394971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.395000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.395385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.395416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 
00:30:25.169 [2024-07-15 20:44:17.395737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.395766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.396054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.396086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.396458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.396489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.396866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.396896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.397261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.397292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.397655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.397685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.397979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.398009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.398399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.398428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.398811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.398842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 00:30:25.169 [2024-07-15 20:44:17.399249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.169 [2024-07-15 20:44:17.399279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.169 qpair failed and we were unable to recover it. 
00:30:25.169 [2024-07-15 20:44:17.399741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.169 [2024-07-15 20:44:17.399770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.169 qpair failed and we were unable to recover it.
00:30:25.169 [2024-07-15 20:44:17.400159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.169 [2024-07-15 20:44:17.400194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.169 qpair failed and we were unable to recover it.
00:30:25.169 [... the same three-line failure record repeats for every subsequent connection attempt from 20:44:17.400483 through 20:44:17.484635; the records are identical except for their timestamps: posix.c:1038:posix_sock_create reports connect() failed, errno = 111, nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420, and each qpair fails without recovery ...]
00:30:25.174 [2024-07-15 20:44:17.484904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.174 [2024-07-15 20:44:17.484936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.174 qpair failed and we were unable to recover it.
00:30:25.174 [2024-07-15 20:44:17.485338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.174 [2024-07-15 20:44:17.485369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.174 qpair failed and we were unable to recover it. 00:30:25.174 [2024-07-15 20:44:17.485765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.174 [2024-07-15 20:44:17.485795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.174 qpair failed and we were unable to recover it. 00:30:25.174 [2024-07-15 20:44:17.486184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.174 [2024-07-15 20:44:17.486215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.174 qpair failed and we were unable to recover it. 00:30:25.174 [2024-07-15 20:44:17.486621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.174 [2024-07-15 20:44:17.486651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.174 qpair failed and we were unable to recover it. 00:30:25.174 [2024-07-15 20:44:17.487057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.174 [2024-07-15 20:44:17.487087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.174 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.487495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.487527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.487915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.487946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.488333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.488364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.488762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.488791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.489195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.489226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 
00:30:25.175 [2024-07-15 20:44:17.489609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.489640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.490028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.490058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.490468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.490500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.490860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.490890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.491301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.491332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.491717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.491748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.492160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.492190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.492593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.492624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.493051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.493082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.493459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.493491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 
00:30:25.175 [2024-07-15 20:44:17.493888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.493919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.494323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.494354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.494815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.494845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.495241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.495274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.495684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.495714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.496116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.496147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.496535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.496566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.496948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.496978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.497378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.497409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.497821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.497851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 
00:30:25.175 [2024-07-15 20:44:17.498286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.498318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.498709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.498745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.499123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.499153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.499523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.499556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.499937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.499966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.500348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.500379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.500813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.500843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.501138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.501170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.501557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.501588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.501950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.501981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 
00:30:25.175 [2024-07-15 20:44:17.502394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.502425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.502829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.502858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.503254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.503285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.503672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.503701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.504113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.504142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.175 [2024-07-15 20:44:17.504431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.175 [2024-07-15 20:44:17.504462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.175 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.504860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.504889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.505258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.505290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.505692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.505720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.506087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.506118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 
00:30:25.176 [2024-07-15 20:44:17.506416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.506450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.506846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.506875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.507292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.507323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.507723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.507755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.508144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.508173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.508554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.508586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.508980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.509010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.509393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.509424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.509818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.509849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.510249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.510283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 
00:30:25.176 [2024-07-15 20:44:17.510646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.510676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.511072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.511102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.511503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.511536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.511919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.511949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.512357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.512389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.512811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.512840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.513238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.513271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.513581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.513612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.514025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.514055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.514438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.514471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 
00:30:25.176 [2024-07-15 20:44:17.514888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.514919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.515296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.515334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.515713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.515743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.516144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.516173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.516570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.516601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.516994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.517024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.517191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.517223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.517617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.517647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.517936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.517964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.518419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.518451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 
00:30:25.176 [2024-07-15 20:44:17.518861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.176 [2024-07-15 20:44:17.518890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.176 qpair failed and we were unable to recover it. 00:30:25.176 [2024-07-15 20:44:17.519316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.519349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.519754] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.519784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.520171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.520201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.520607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.520638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.521044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.521074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.521458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.521491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.521877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.521907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.522355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.522387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.522746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.522775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 
00:30:25.177 [2024-07-15 20:44:17.523146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.523176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.523553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.523586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.524015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.524045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.524412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.524444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.524829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.524858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.525116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.525147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.525586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.525616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.525974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.526006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.526380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.526411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.526792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.526822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 
00:30:25.177 [2024-07-15 20:44:17.527225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.527266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.527659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.527688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.177 [2024-07-15 20:44:17.528075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.177 [2024-07-15 20:44:17.528105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.177 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.528558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.528594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.528928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.528958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.529241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.529273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.529691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.529722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.529989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.530018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.530418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.530449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.530852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.530883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 
00:30:25.449 [2024-07-15 20:44:17.531271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.531303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.531697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.531733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.531986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.532018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.532423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.532455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.532898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.532929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.533314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.533345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.533746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.533776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.534179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.534209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.534511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.534544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.534821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.534854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 
00:30:25.449 [2024-07-15 20:44:17.535208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.535250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.535660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.535691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.536150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.536181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.536603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.536635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.536836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.536866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.537249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.537281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.537647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.537677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.537939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.537968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.538353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.538384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.538652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.538680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 
00:30:25.449 [2024-07-15 20:44:17.539068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.539096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.539495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.539526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.539930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.539959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.449 qpair failed and we were unable to recover it. 00:30:25.449 [2024-07-15 20:44:17.540357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.449 [2024-07-15 20:44:17.540388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.540777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.540808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.541195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.541224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.541622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.541652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.542058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.542088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.542509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.542541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.542922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.542953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 
00:30:25.450 [2024-07-15 20:44:17.543351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.543383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.543749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.543777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.544174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.544203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.544614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.544644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.544995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.545030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.545300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.545332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.545609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.545637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.546023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.546053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.546417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.546450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.546860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.546890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 
00:30:25.450 [2024-07-15 20:44:17.547285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.547315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.547734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.547770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.548151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.548183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.548554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.548586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.548857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.548888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.549277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.549309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.549713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.549742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.550144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.550173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.550566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.550600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.550881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.550913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 
00:30:25.450 [2024-07-15 20:44:17.551312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.551344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.551747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.551776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.552167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.552197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.552571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.552603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.553008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.553038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.553404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.553434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.553827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.553857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.554260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.554294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.554730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.554761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.555162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.555193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 
00:30:25.450 [2024-07-15 20:44:17.555603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.555635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.556102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.556131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.556542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.556573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.450 [2024-07-15 20:44:17.556976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.450 [2024-07-15 20:44:17.557005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.450 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.557417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.557447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.557822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.557852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.558263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.558295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.558688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.558721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.559107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.559141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.559416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.559447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-07-15 20:44:17.559849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.559879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.560299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.560330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.560715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.560744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.561127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.561156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.561455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.561487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.561890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.561919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.562301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.562331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.562697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.562727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.563125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.563155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.563567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.563598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-07-15 20:44:17.563984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.564015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.564398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.564429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.564722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.564753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.565155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.565186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.565600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.565632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.566056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.566087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.566453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.566485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.566849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.566879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.567271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.567304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.567699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.567728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-07-15 20:44:17.568129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.568159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.568537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.568568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.568954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.568985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.569369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.569401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.569815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.569846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.570251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.570283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.570685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.570715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.571107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.571138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.571522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.571553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.571950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.571981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 
00:30:25.451 [2024-07-15 20:44:17.572358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.572391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.572782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.572811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.573100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.573132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.573502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.451 [2024-07-15 20:44:17.573533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.451 qpair failed and we were unable to recover it. 00:30:25.451 [2024-07-15 20:44:17.573920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.573950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.574333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.574365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.574786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.574816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.575222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.575280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.575670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.575710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.576093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.576124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-07-15 20:44:17.576505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.576539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.576898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.576927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.577195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.577226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.577654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.577685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.578087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.578118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.578503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.578533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.578919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.578950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.579340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.579371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.579778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.579808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.580217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.580259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-07-15 20:44:17.580674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.580703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.581099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.581131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.581500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.581532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.581929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.581960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.582354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.582386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.582821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.582852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.583130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.583162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.583540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.583571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.583955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.583985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.584367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.584397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-07-15 20:44:17.584755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.584785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.585190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.585220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.585630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.585662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.586040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.586069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.586397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.586430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.586837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.586867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.587254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.587286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.587651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.587680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.588085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.588115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.588500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.588533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 
00:30:25.452 [2024-07-15 20:44:17.588925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.588955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.589341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.589374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.589784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.589814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.590080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.590109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.590490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.452 [2024-07-15 20:44:17.590522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.452 qpair failed and we were unable to recover it. 00:30:25.452 [2024-07-15 20:44:17.590910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.590940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.591375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.591405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.591782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.591811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.592203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.592247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.592661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.592691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
00:30:25.453 [2024-07-15 20:44:17.593088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.593118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.593502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.593533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.593922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.593954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.594213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.594256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.594628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.594658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.594926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.594955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.595347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.595379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.595766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.595797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.596199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.596240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.596626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.596656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
00:30:25.453 [2024-07-15 20:44:17.597090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.597121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.597388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.597421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.597819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.597850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.598258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.598289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.598593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.598621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.599019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.599050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.599458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.599490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.599886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.599915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.600307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.600338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.600734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.600764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 
00:30:25.453 [2024-07-15 20:44:17.601162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.601192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.601566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.601597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.601877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.601909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.602299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.602330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.602679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.602711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.603113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.603143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.603524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.603557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.603941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.603971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.604337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.604369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.453 qpair failed and we were unable to recover it. 00:30:25.453 [2024-07-15 20:44:17.604763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.453 [2024-07-15 20:44:17.604792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 
00:30:25.454 [2024-07-15 20:44:17.605173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.605203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.605610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.605641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.606038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.606068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.606437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.606470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.606874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.606904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.607292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.607325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.607732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.607761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.608131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.608161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.608443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.608480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.608864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.608894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 
00:30:25.454 [2024-07-15 20:44:17.609301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.609332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.609590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.609620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.610002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.610034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.610421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.610452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.610854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.610884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.611134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.611165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.611558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.611589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.612050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.612079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.612340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.612371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.612788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.612818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 
00:30:25.454 [2024-07-15 20:44:17.613203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.613242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.613519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.613550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.613923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.613953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.614302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.614331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.614726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.614756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.615151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.615181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.615578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.615610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.616017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.616047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.616436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.616468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 00:30:25.454 [2024-07-15 20:44:17.616862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.454 [2024-07-15 20:44:17.616891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.454 qpair failed and we were unable to recover it. 
00:30:25.454 [2024-07-15 20:44:17.617249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.617281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.617680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.617710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.618096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.618126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.618560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.618591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.618992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.619022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.619428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.619462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.619847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.619877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.620265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.620297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.620693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.620723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.621127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.621157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.454 qpair failed and we were unable to recover it.
00:30:25.454 [2024-07-15 20:44:17.621549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.454 [2024-07-15 20:44:17.621582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.621972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.622002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.622267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.622298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.622712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.622743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.623098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.623130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.623518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.623550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.623955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.623985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.624376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.624407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.624886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.624922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.625310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.625341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.625771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.625802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.626201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.626241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.626605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.626635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.627032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.627063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.627425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.627455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.627855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.627885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.628275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.628306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.628775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.628806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.629174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.629204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.629615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.629647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.630024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.630055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.630450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.630482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.630912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.630943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.631354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.631386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.631595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.631626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.632028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.632059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.632459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.632492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.632795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.632826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.633206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.633248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.633582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.633613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.633882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.633912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.634286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.634319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.634755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.634785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.635170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.635200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.635484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.635514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.635896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.635927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.636311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.636343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.636614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.636644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.637056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.637086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.637531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.637562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.637941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.637973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.455 qpair failed and we were unable to recover it.
00:30:25.455 [2024-07-15 20:44:17.638251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.455 [2024-07-15 20:44:17.638282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.638705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.638736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.639140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.639170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.639620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.639653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.639989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.640019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.640429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.640460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.640818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.640849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.641249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.641286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.641651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.641681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.641996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.642026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.642435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.642465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.642851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.642881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.643261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.643292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.643689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.643718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.644125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.644155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.644567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.644599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.644989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.645020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.645390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.645421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.645823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.645854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.646249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.646280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.646672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.646702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.647118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.647149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.647528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.647561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.647941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.647972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.648349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.648381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.648768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.648797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.649165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.649193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.649407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.649437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.649761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.649791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.650097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.650126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.650550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.650582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.650860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.650888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.651131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.651160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.651574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.651605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.652027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.652058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.652436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.652468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.652758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.652790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.653208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.653248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.653526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.653555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.653932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.653963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.654346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.654378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.456 qpair failed and we were unable to recover it.
00:30:25.456 [2024-07-15 20:44:17.654755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.456 [2024-07-15 20:44:17.654786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.655207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.655251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.655555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.655584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.655971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.656000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.656393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.656426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.656831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.656861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.657248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.657295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.657599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.657629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.658008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.658038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.658310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.658340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.658767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.658797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.659178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.659208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.659592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.659624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.660032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.660061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.660444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.660476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.660854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.660884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.661296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.661326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.661703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.661731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.662118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.662149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.662401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.662430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.662609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.662642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.662927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.662960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.663384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.663416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.663802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.663832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.664203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.664246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.664701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.664730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.665119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.665149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.665550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.665581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.665977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.666006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.666271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.666301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.666690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.666719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.667124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.667155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.667462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.667492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.667800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.667829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.668226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.668269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.668737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.457 [2024-07-15 20:44:17.668767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.457 qpair failed and we were unable to recover it.
00:30:25.457 [2024-07-15 20:44:17.669061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.669092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.669489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.669521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.669903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.669934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.670197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.670228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.670664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.670694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.670988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.671016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.671317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.671350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.671644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.671675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.672095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.672124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.672495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.672528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.672907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.672944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.673212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.673253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.673657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.673686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.674086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.674117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.674367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.674397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.674785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.674814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.675223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.675279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.675672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.675702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.676088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.676118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.676350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.676379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.676767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.676796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.677207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.677249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.677680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.677710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.678089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.678120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.678361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.678391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.678800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.678829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.679188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.679218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.679625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.679656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.680054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.680083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.680457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.680486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.680884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.680913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.681290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.681323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.681730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.681760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.682163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.682192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.682537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.682570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.682878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.682909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.683314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.683345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.683758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.683788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.684174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.684204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.684594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.684625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.685022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.685051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.458 qpair failed and we were unable to recover it.
00:30:25.458 [2024-07-15 20:44:17.685420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.458 [2024-07-15 20:44:17.685451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420
00:30:25.459 qpair failed and we were unable to recover it.
00:30:25.459 [2024-07-15 20:44:17.685818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.685849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.686219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.686258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.686665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.686695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.687067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.687098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.687480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.687515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.687877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.687906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.688303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.688334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.688727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.688758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.689037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.689072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.689437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.689468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 
00:30:25.459 [2024-07-15 20:44:17.689864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.689895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.690291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.690321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.690609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.690640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.691044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.691074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.691446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.691479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.691878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.691908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.692287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.692320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.692727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.692756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.693155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.693185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.693584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.693615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 
00:30:25.459 [2024-07-15 20:44:17.693993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.694022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.694408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.694439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.694855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.694885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.695284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.695316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.695718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.695747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.696133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.696163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.696555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.696586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.696989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.697019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.697418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.697449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.697712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.697741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 
00:30:25.459 [2024-07-15 20:44:17.698147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.698178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.698563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.698595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.698976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.699006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.699401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.699432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.699828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.699858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.700214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.700255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.700653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.700683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.701071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.701101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.701482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.701514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 00:30:25.459 [2024-07-15 20:44:17.701914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.701943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.459 qpair failed and we were unable to recover it. 
00:30:25.459 [2024-07-15 20:44:17.702334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.459 [2024-07-15 20:44:17.702365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.702750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.702781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.703179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.703208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.703559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.703591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.703984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.704014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.704280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.704310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.704745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.704773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.705178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.705207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.705608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.705643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.706024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.706055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 
00:30:25.460 [2024-07-15 20:44:17.706374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.706404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.706584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.706615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.707034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.707063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.707448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.707480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.707757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.707787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.708193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.708224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.708626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.708656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.709042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.709071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.709472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.709503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.709864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.709895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 
00:30:25.460 [2024-07-15 20:44:17.710284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.710315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.710705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.710734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.711104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.711131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.711501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.711530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.711921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.711947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.712339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.712368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.712775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.712802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.713210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.713248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.713638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.713665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.714056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.714085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 
00:30:25.460 [2024-07-15 20:44:17.714423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.714452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.714744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.714773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.715183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.715213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.715598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.715629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.716067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.716097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.716487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.716519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.716943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.716974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.717373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.717405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.717838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.717870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.718254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.718287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 
00:30:25.460 [2024-07-15 20:44:17.718697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.460 [2024-07-15 20:44:17.718729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.460 qpair failed and we were unable to recover it. 00:30:25.460 [2024-07-15 20:44:17.719121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.719152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.719589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.719622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.720048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.720078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.720459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.720490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.720880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.720912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.721304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.721336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.721614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.721646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.722077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.722113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.722509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.722542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 
00:30:25.461 [2024-07-15 20:44:17.722956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.722987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.723420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.723451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.723837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.723868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.724263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.724295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.724757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.724789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.725190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.725596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.725628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.726015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.726045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.726441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.726473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.726884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.726915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 
00:30:25.461 [2024-07-15 20:44:17.727274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.727306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.727591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.727621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.728037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.728069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.728356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.728389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.728660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.728693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.729017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.729049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.729451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.729484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.729659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.729693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.730021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.730052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.730454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.730486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 
00:30:25.461 [2024-07-15 20:44:17.730888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.730920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.731326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.731359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.731722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.731753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.732022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.732055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.732462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.732494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.732918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.732949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.733347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.733381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.733774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.733803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.734058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.461 [2024-07-15 20:44:17.734089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.461 qpair failed and we were unable to recover it. 00:30:25.461 [2024-07-15 20:44:17.734498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.734529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 
00:30:25.462 [2024-07-15 20:44:17.734919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.734950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.735340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.735372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.735743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.735774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.736183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.736212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.736705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.736736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.737128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.737157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.737556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.737586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.737944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.737975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.738347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.738390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.738665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.738694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 
00:30:25.462 [2024-07-15 20:44:17.739092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.739122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.739505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.739535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.739929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.739958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.740354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.740385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.740784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.740814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.741192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.741222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.741633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.741663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.742049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.742079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.742528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.742559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.742961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.742990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 
00:30:25.462 [2024-07-15 20:44:17.743400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.743433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.743785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.743817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.744244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.744276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.744709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.744739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.745129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.745158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.745569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.745600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.746009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.746040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.746417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.746448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.746849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.746878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 00:30:25.462 [2024-07-15 20:44:17.747262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.747295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it. 
00:30:25.462 [2024-07-15 20:44:17.747692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.462 [2024-07-15 20:44:17.747722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.462 qpair failed and we were unable to recover it.
00:30:25.462 [... the identical three-line sequence — posix_sock_create connect() failed (errno = 111), nvme_tcp_qpair_connect_sock connection error for tqpair=0x7f5af0000b90 (addr=10.0.0.2, port=4420), "qpair failed and we were unable to recover it." — repeats roughly 200 more times between 20:44:17.747 and 20:44:17.834; duplicate entries elided ...]
00:30:25.740 [2024-07-15 20:44:17.834495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-07-15 20:44:17.834527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.740 qpair failed and we were unable to recover it.
00:30:25.740 [2024-07-15 20:44:17.834854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.740 [2024-07-15 20:44:17.834884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.835274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.835304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.835723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.835752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.836154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.836185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.836646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.836676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.837066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.837097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.837494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.837525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.837924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.837955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.838350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.838382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.838769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.838800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-07-15 20:44:17.839264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.839297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.839684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.839720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.840102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.840133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.840526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.840557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.840954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.840984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.841391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.841421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.841813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.841844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.842222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.842265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.842698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.842728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.843159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.843190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-07-15 20:44:17.843616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.843649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.844042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.844072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.844480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.844512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.844910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.844940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.845333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.845364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.845752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.845781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.846176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.846206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.846623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.846653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.847044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.847076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.847464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.847496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-07-15 20:44:17.847908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.847939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.848342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.848373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.848788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.848818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.849202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.849242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.849645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.849675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.850100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.850134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.850523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.850553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.850938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.850969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.851368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.851399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.851799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.851828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 
00:30:25.741 [2024-07-15 20:44:17.852187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.741 [2024-07-15 20:44:17.852218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.741 qpair failed and we were unable to recover it. 00:30:25.741 [2024-07-15 20:44:17.852633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.852662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.853059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.853089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.853489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.853521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.853906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.853937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.854325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.854356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.854762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.854792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.855159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.855191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.855603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.855635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.856026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.856056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-07-15 20:44:17.856452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.856486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.856893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.856923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.857316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.857347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.857742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.857774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.858170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.858201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.858685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.858716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.859103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.859133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.859531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.859564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.859963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.859994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.860407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.860440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-07-15 20:44:17.860826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.860856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.861250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.861282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.861696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.861726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.862129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.862159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.862547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.862579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.862865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.862897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.863312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.863343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.863742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.863772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.864162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.864192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.864561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.864592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-07-15 20:44:17.864993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.865023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.865300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.865335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.865750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.865783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.866164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.866195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.866471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.866506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.866919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.866955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.867340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.867372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.867753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.867788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.868180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.868210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.868612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.868642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 
00:30:25.742 [2024-07-15 20:44:17.868931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.868963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.742 [2024-07-15 20:44:17.869347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.742 [2024-07-15 20:44:17.869378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.742 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.869777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.869808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.870218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.870257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.870679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.870709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.871105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.871134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.871530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.871562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.871966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.871996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.872374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.872406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.872799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.872829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 
00:30:25.743 [2024-07-15 20:44:17.873107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.873139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.873538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.873570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.874003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.874034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.874539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.874571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.874994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.875024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.875419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.875451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.875916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.875947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.876290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.876320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.876702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.876732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.877160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.877190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 
00:30:25.743 [2024-07-15 20:44:17.877541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.877574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.877921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.877951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.878413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.878445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.878843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.878873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.879254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.879287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.879708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.879737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.880149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.880180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.880517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.880548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.880957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.880987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.881401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.881432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 
00:30:25.743 [2024-07-15 20:44:17.881851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.881881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.882247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.882282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.882678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.882707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.883104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.883134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 [2024-07-15 20:44:17.883548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.743 [2024-07-15 20:44:17.883578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af0000b90 with addr=10.0.0.2, port=4420 00:30:25.743 qpair failed and we were unable to recover it. 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Write completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Write completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Write completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 Read completed with error (sct=0, sc=8) 00:30:25.743 starting I/O failed 00:30:25.743 
00:30:25.744 [2024-07-15 20:44:17.883928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.744 [2024-07-15 20:44:17.884360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.744 [2024-07-15 20:44:17.884382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.744 qpair failed and we were unable to recover it.
00:30:25.744 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats 46 more times for tqpair=0x23b9a50, addr=10.0.0.2, port=4420, from 2024-07-15 20:44:17.884813 through 20:44:17.901078 ...]
00:30:25.745 [2024-07-15 20:44:17.901455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.901469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.901813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.901825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.902344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.902359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.902758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.902771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.903151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.903165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.903531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.903545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.904154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.904192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.904646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.904659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.904997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.905008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.905391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.905403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 
00:30:25.745 [2024-07-15 20:44:17.905767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.905778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.906140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.906152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.906398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.906410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.906548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.906560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.906916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.906931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.907312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.907324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.907703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.907713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.907944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.907955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.908155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.908169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.908589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.908601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 
00:30:25.745 [2024-07-15 20:44:17.908839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.908854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.909223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.909245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.909626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.909637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.910015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.910025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.745 [2024-07-15 20:44:17.910371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.745 [2024-07-15 20:44:17.910383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.745 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.910611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.910622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.910841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.910854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.911240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.911253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.911636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.911646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.912001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.912013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 
00:30:25.746 [2024-07-15 20:44:17.912391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.912402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.912646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.912657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.913034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.913045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.913434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.913444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.913863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.913874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.914261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.914272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.914656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.914667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.915049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.915060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.915306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.915316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.915693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.915705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 
00:30:25.746 [2024-07-15 20:44:17.916082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.916096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.916494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.916508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.916895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.916905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.917300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.917311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.917718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.917728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.918075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.918086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.918439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.918450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.918797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.918808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.919198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.919210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.919492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.919504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 
00:30:25.746 [2024-07-15 20:44:17.919826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.919836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.920193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.920203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.920473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.920484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.920949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.920960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.921160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.921171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.921512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.921523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.921724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.921738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.922017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.922027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.922393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.922405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.922755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.922769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 
00:30:25.746 [2024-07-15 20:44:17.923032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.923044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.923377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.923388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.923781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.923791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.924129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.924141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.924511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.924521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.924829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.746 [2024-07-15 20:44:17.924848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.746 qpair failed and we were unable to recover it. 00:30:25.746 [2024-07-15 20:44:17.925271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.925282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.925652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.925662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.926029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.926039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.926399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.926410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 
00:30:25.747 [2024-07-15 20:44:17.926781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.926792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.927152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.927162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.927547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.927560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.927904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.927915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.928290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.928303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.928652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.928663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.929012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.929022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.929397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.929409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.929778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.929790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.930154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.930165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 
00:30:25.747 [2024-07-15 20:44:17.930509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.930520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.930844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.930854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.931216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.931241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.931618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.931628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.932004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.932015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.932392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.932402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.932742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.932753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.932964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.932976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.933348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.933358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.933760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.933769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 
00:30:25.747 [2024-07-15 20:44:17.934038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.934049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.934432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.934442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.934814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.934827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.935199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.935211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.935469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.935480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.935738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.935749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.936090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.936100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.936383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.936393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.936790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.936801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.937108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.937118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 
00:30:25.747 [2024-07-15 20:44:17.937490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.937501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.937878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.937889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.938263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.938274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.938697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.938707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.939071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.939080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.939451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.939463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.939851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.747 [2024-07-15 20:44:17.939862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.747 qpair failed and we were unable to recover it. 00:30:25.747 [2024-07-15 20:44:17.940228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.940247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.940616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.940628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.941013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.941027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
00:30:25.748 [2024-07-15 20:44:17.941500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.941557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.942004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.942017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.942396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.942408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.942764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.942774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.943148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.943157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.943509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.943519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.943774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.943786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.944176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.944188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.944433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.944444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.944767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.944778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
00:30:25.748 [2024-07-15 20:44:17.945156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.945165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.945513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.945524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.945894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.945905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.946276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.946287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.946620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.946631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.946883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.946893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.947135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.947145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.947467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.947480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.947813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.947823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 00:30:25.748 [2024-07-15 20:44:17.948047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.748 [2024-07-15 20:44:17.948059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.748 qpair failed and we were unable to recover it. 
00:30:25.748 [2024-07-15 20:44:17.948297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.748 [2024-07-15 20:44:17.948307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.748 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry logged between 20:44:17.948 and 20:44:18.023; only the timestamps change ...]
00:30:25.754 [2024-07-15 20:44:18.023019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.754 [2024-07-15 20:44:18.023030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.754 qpair failed and we were unable to recover it.
00:30:25.754 [2024-07-15 20:44:18.023286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.023296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.023670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.023680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.024070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.024080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.024418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.024428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.024777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.024789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.025096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.025106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.025498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.025509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.025866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.025876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.026226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.026245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.026583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.026593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 
00:30:25.754 [2024-07-15 20:44:18.026961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.026972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.027346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.027356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.027738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.027748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.028099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.028114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.028481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.028491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.028865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.028876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.029253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.029264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.029696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.029706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.030076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.030086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.030335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.030346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 
00:30:25.754 [2024-07-15 20:44:18.030705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.030714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.031092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.031102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.031498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.031509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.031886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.031897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.032200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.032210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.032583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.032594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.032818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.032829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.033209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.033220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.033583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.033593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.033961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.033972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 
00:30:25.754 [2024-07-15 20:44:18.034361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.034372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.034742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.034752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.035120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.035129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.035464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.035474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.035898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.754 [2024-07-15 20:44:18.035908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.754 qpair failed and we were unable to recover it. 00:30:25.754 [2024-07-15 20:44:18.036243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.036255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.036580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.036590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.036961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.036970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.037206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.037215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.037487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.037497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 
00:30:25.755 [2024-07-15 20:44:18.037832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.037846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.038226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.038242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.038607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.038617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.038820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.038829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.039152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.039162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.039508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.039519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.039886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.039896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.040269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.040279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.040616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.040626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.041045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.041055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 
00:30:25.755 [2024-07-15 20:44:18.041404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.041415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.041786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.041795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.042164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.042173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.042510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.042521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.042722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.042732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.043056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.043066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.043401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.043411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.043809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.043819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.044148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.044157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.044526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.044536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 
00:30:25.755 [2024-07-15 20:44:18.044904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.044914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.045140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.045151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.045526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.045537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.045841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.045851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.046223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.046239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.046619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.046629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.046972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.046982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.047352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.047363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.047697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.047707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.048075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.048084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 
00:30:25.755 [2024-07-15 20:44:18.048444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.048454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.048824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.048834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.049201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.049211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.049574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.049584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.049939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.049950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.050133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.755 [2024-07-15 20:44:18.050144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.755 qpair failed and we were unable to recover it. 00:30:25.755 [2024-07-15 20:44:18.050523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.050535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.050890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.050900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.051262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.051272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.051617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.051626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 
00:30:25.756 [2024-07-15 20:44:18.051929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.051938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.052307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.052319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.052676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.052686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.053050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.053059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.053402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.053412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.053779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.053788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.054157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.054168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.054519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.054530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.054895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.054904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.055273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.055284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 
00:30:25.756 [2024-07-15 20:44:18.055615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.055624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.055995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.056005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.056370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.056382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.056626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.056635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.057004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.057014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.057383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.057394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.057752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.057762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.058120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.058130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.058361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.058371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.058735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.058744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 
00:30:25.756 [2024-07-15 20:44:18.059113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.059124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.059385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.059395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.059757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.059767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.060137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.060147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.060525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.060537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.060892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.060902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.061306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.061318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.061691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.061701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.062068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.062080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.062419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.062429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 
00:30:25.756 [2024-07-15 20:44:18.062794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.062803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.756 [2024-07-15 20:44:18.063187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.756 [2024-07-15 20:44:18.063198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.756 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.063566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.063578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.063948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.063958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.064317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.064328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.064750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.064760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.065043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.065054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.065272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.065284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.065521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.065531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.065865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.065875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 
00:30:25.757 [2024-07-15 20:44:18.066240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.066250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.066635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.066645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.067011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.067021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.067397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.067407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.067789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.067800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.068122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.068131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.068483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.068495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.068854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.068864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.069245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.069255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.069580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.069590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 
00:30:25.757 [2024-07-15 20:44:18.069965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.069976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.070345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.070356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.070704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.070713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.071098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.071107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.071493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.071503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.071869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.071881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.072250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.072261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.072626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.072637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.073007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.073016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.073371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.073382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 
00:30:25.757 [2024-07-15 20:44:18.073749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.073759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.074112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.074123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.074484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.074495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.074802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.074811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.075176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.075186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.075558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.075568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.075940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.075949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.757 qpair failed and we were unable to recover it. 00:30:25.757 [2024-07-15 20:44:18.076312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.757 [2024-07-15 20:44:18.076322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.758 qpair failed and we were unable to recover it. 00:30:25.758 [2024-07-15 20:44:18.076719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.758 [2024-07-15 20:44:18.076729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.758 qpair failed and we were unable to recover it. 00:30:25.758 [2024-07-15 20:44:18.076974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:25.758 [2024-07-15 20:44:18.076984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:25.758 qpair failed and we were unable to recover it. 
00:30:25.758 [2024-07-15 20:44:18.077347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.077358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.077688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.077697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.078066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.078075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.078440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.078451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.078823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.078832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.079209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.079219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.079456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.079469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.079830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.079839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.080211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.080220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.080602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.080612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.080974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.080984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.081351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.081363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.081768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.081778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.082106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.082116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.082461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.082471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.082844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.082854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.083201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.083212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.083562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.083573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.083944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.083955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.084320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.084330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.084660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.084669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.084870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.084881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.085127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.085139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.085482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.085494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.085696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.085707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.085943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.085954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.086255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.086266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.086622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.086632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.086962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.086973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.087316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.087326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.087668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.087678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.088059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.088071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.088326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.088337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.088688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.758 [2024-07-15 20:44:18.088698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.758 qpair failed and we were unable to recover it.
00:30:25.758 [2024-07-15 20:44:18.089064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.089074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.089454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.089465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.089817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.089827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.090195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.090205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.090575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.090585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.090949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.090958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.091333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.091343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.091770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.091780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.092119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.092130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.092493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.092504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.092874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.092884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.093245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.093255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.093629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.093639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.093841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.093851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.094217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.094227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.094592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.094602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.094851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.094863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.095210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.095219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.095605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.095615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.095981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.095993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.096384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.096395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.096726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.096735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.097101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.097112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.097375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.097386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.097735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.097744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.098114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.098123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.098507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.098517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.098889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.098899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.099275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.099287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.099518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.099527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.099893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.099903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.100266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.100277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.100626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.100635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.101004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.101014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.101382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.101393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.101728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.101738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.102100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.102109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.102465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.102476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.102779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.102789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.103003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.103013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.103400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.103410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.759 qpair failed and we were unable to recover it.
00:30:25.759 [2024-07-15 20:44:18.103737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.759 [2024-07-15 20:44:18.103748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.104120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.104130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.104487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.104497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.104864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.104874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.105249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.105260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.105590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.105602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.105820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.105832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.106215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.106225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.106588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.106598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.106951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.106960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:25.760 [2024-07-15 20:44:18.107295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:25.760 [2024-07-15 20:44:18.107305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:25.760 qpair failed and we were unable to recover it.
00:30:26.032 [2024-07-15 20:44:18.107644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.032 [2024-07-15 20:44:18.107657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.032 qpair failed and we were unable to recover it.
00:30:26.032 [2024-07-15 20:44:18.108022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.032 [2024-07-15 20:44:18.108033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.032 qpair failed and we were unable to recover it.
00:30:26.032 [2024-07-15 20:44:18.108401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.032 [2024-07-15 20:44:18.108412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.032 qpair failed and we were unable to recover it.
00:30:26.032 [2024-07-15 20:44:18.108748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.032 [2024-07-15 20:44:18.108758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.032 qpair failed and we were unable to recover it.
00:30:26.032 [2024-07-15 20:44:18.109126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.032 [2024-07-15 20:44:18.109135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.109486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.109496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.109835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.109845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.110199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.110212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.110450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.110461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.110818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.110828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.111171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.111181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.111599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.111609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.111939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.111949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.112314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.112324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.112710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.112721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.113097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.113107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.113335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.113345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.113671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.113681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.114089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.114099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.114494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.114504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.114707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.114719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.115073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.115086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.115448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.115458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.115827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.115836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.116204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.116213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.116646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.116656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.116996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.117007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.117377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.117387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.117752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.117762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.118126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.118135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.118348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.118358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.118693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.118702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.118908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.118920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.119239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.119249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.119613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.119623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.119955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.119966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.120292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.120303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.120625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.120635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.120996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.121005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.121372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.121383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.121825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.121836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.122165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.122175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.122532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.122542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.033 [2024-07-15 20:44:18.122836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.033 [2024-07-15 20:44:18.122846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.033 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.123190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.123200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.123500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.123509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.123856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.123867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.124265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.124275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.124658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.124667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.125016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.125026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.125362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.125372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.125799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.125808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.126140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.126153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.126503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.126513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.126832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.126841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.127209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.127218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.127559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.127568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.127935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.127945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.128313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.128323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.128766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.128776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.129121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.129131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.129488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.129498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.129838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.129847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.130178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.130188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.130497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.130507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.130753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.130764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.131122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.131132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.131528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.131538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.131888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.131897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.132152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.132161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.132493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.132504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.132854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.132865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.133247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.133258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.133622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.133632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.133963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.133972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.134286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.134296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.134561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.134571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.134870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.134879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.135130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.135141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.135510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.135523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.135799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.135809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.136038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.136048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.136341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.136352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.136709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.034 [2024-07-15 20:44:18.136719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.034 qpair failed and we were unable to recover it.
00:30:26.034 [2024-07-15 20:44:18.137042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.034 [2024-07-15 20:44:18.137051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.137296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.137306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.137546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.137555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.137796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.137806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.138157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.138167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.138514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.138528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.138725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.138734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.139108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.139117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.139476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.139486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.139867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.139877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 
00:30:26.035 [2024-07-15 20:44:18.140113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.140122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.140432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.140442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.140817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.140827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.141213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.141222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.141560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.141570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.141934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.141944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.142297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.142309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.142676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.142686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.142934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.142944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.143297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.143308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 
00:30:26.035 [2024-07-15 20:44:18.143670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.143679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.144042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.144052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.144393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.144403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.144708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.144718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.145089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.145099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.145438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.145448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.145860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.145870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.146210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.146221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.146588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.146599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.146961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.146970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 
00:30:26.035 [2024-07-15 20:44:18.147340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.147351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.147706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.147715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.148116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.148128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.148385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.148395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.148730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.148741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.149108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.149119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.149471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.149482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.149834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.149844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.150215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.150224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.150610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.150619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 
00:30:26.035 [2024-07-15 20:44:18.150947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.150958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.151352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.035 [2024-07-15 20:44:18.151363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.035 qpair failed and we were unable to recover it. 00:30:26.035 [2024-07-15 20:44:18.151577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.151586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.151968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.151977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.152345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.152358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.152716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.152728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.152935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.152944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.153022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.153034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.153359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.153369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.153800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.153809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 
00:30:26.036 [2024-07-15 20:44:18.154108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.154118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.154519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.154528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.154720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.154731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.155092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.155104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.155463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.155474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.155845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.155855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.156262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.156273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.156607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.156616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.157074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.157084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.157420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.157430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 
00:30:26.036 [2024-07-15 20:44:18.157804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.157815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.158187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.158197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.158646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.158657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.158841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.158851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.159251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.159262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.159595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.159604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.159926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.159936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.160285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.160296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.160641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.160650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.160749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.160758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 
00:30:26.036 [2024-07-15 20:44:18.160988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.160997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.161340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.161350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.161749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.161759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.162188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.162198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.162577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.162587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.162950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.162960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.163294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.163305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.163672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.163681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.164063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.164074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.164433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.164443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 
00:30:26.036 [2024-07-15 20:44:18.164692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.164702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.164923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.164933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.036 qpair failed and we were unable to recover it. 00:30:26.036 [2024-07-15 20:44:18.165115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.036 [2024-07-15 20:44:18.165124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.165496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.165506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.165853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.165863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.166212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.166222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.166594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.166606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.166862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.166872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.167236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.167246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.167591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.167600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 
00:30:26.037 [2024-07-15 20:44:18.167968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.167978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.168175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.168185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.168535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.168546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.168908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.168918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.169264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.169275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.169666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.169675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.170037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.170046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.170450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.170460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.170826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.170836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.171225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.171239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 
00:30:26.037 [2024-07-15 20:44:18.171568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.171583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.171798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.171808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.172163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.172172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.172392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.172402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.172769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.172780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.173116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.173126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.173479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.173490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.173867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.173876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.174215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.174226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.174627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.174637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 
00:30:26.037 [2024-07-15 20:44:18.174963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.174972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.175393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.175404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.175805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.175815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.176183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.176193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.176577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.176587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.176965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.176975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.177340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.037 [2024-07-15 20:44:18.177351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.037 qpair failed and we were unable to recover it. 00:30:26.037 [2024-07-15 20:44:18.177724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.177734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.177970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.177981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.178211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.178221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 
00:30:26.038 [2024-07-15 20:44:18.178606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.178616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.178949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.178958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.179186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.179196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.179421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.179431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.179823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.179834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.180048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.180058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.180407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.180417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.180755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.180767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.181128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.181138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.181349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.181359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 
00:30:26.038 [2024-07-15 20:44:18.181682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.181692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.182057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.182066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.182430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.182440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.182651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.182661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.183011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.183020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.183246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.183258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.183590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.183600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.183966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.183977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.184295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.184306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.184538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.184548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 
00:30:26.038 [2024-07-15 20:44:18.184857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.184866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.185234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.185245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.185575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.185585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.185946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.185956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.186319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.186330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.186740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.186750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.187149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.187159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.187501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.187511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.187886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.187895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.188256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.188267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 
00:30:26.038 [2024-07-15 20:44:18.188637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.188646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.189007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.189017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.189381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.189391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.189757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.189766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.189999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.190012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.190342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.190353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.190692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.190701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.191063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.191073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.038 qpair failed and we were unable to recover it. 00:30:26.038 [2024-07-15 20:44:18.191302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.038 [2024-07-15 20:44:18.191312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.039 qpair failed and we were unable to recover it. 00:30:26.039 [2024-07-15 20:44:18.191627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.039 [2024-07-15 20:44:18.191637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.039 qpair failed and we were unable to recover it. 
00:30:26.043 [2024-07-15 20:44:18.258864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.258874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.259212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.259222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.259472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.259482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.259830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.259839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.260199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.260208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.260532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.260543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.260888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.260897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.261263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.261273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.261665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.043 [2024-07-15 20:44:18.261676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.043 qpair failed and we were unable to recover it. 00:30:26.043 [2024-07-15 20:44:18.262019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.262029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 
00:30:26.044 [2024-07-15 20:44:18.262446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.262456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.262826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.262835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.263182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.263191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.263439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.263449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.263734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.263744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.264113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.264123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.264403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.264413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.264784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.264794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.265155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.265165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.265527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.265538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 
00:30:26.044 [2024-07-15 20:44:18.265835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.265845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.266199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.266211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.266561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.266571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.266926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.266935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.267264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.267274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.267632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.267641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.267986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.267996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.268367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.268377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.268729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.268739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.269082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.269092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 
00:30:26.044 [2024-07-15 20:44:18.269424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.269434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.269791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.269801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.270158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.270168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.270519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.270530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.270876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.270886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.271235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.271245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.271476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.271485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.271864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.271873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.272196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.272205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.272663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.272674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 
00:30:26.044 [2024-07-15 20:44:18.273021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.273030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.273374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.273384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.273746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.273756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.274105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.274114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.274466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.274476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.274855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.274866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.275208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.275217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.275565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.275574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.275933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.275942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 00:30:26.044 [2024-07-15 20:44:18.276297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.044 [2024-07-15 20:44:18.276308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.044 qpair failed and we were unable to recover it. 
00:30:26.044 [2024-07-15 20:44:18.276523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.276533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.276880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.276890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.277186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.277196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.277553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.277563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.277910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.277919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.278274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.278284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.278641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.278650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.279024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.279034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.279365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.279376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.279742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.279752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 
00:30:26.045 [2024-07-15 20:44:18.280102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.280112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.280416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.280426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.280749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.280758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.281124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.281133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.281484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.281495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.281842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.281852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.282196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.282205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.282446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.282456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.282788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.282798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.283162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.283172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 
00:30:26.045 [2024-07-15 20:44:18.283511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.283521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.283882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.283891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.284262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.284272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.284668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.284678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.285001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.285011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.285336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.285346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.285693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.285702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.285947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.285956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.286331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.286341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.286692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.286702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 
00:30:26.045 [2024-07-15 20:44:18.287037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.287046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.287431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.287441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.287761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.287770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.288140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.288150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.288356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.288367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.288726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.045 [2024-07-15 20:44:18.288736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.045 qpair failed and we were unable to recover it. 00:30:26.045 [2024-07-15 20:44:18.289117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.289126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.289479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.289489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.289833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.289844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.290192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.290203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 
00:30:26.046 [2024-07-15 20:44:18.290554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.290563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.290912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.290921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.291292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.291302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.291647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.291657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.292012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.292022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.292375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.292385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.292728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.292738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.293085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.293094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.293439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.293449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.293815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.293825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 
00:30:26.046 [2024-07-15 20:44:18.294173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.294182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.294434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.294444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.294697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.294706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.295071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.295080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.295498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.295508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.295875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.295884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.296208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.296218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.296631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.296641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.296973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.296983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.297327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.297338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 
00:30:26.046 [2024-07-15 20:44:18.297705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.297715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.298079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.298089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.298444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.298455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.298835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.298845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.299168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.299177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.299497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.299506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.299865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.299876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.300220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.300234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.300607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.300617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.300972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.300981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 
00:30:26.046 [2024-07-15 20:44:18.301321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.301332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.301697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.301706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.302053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.302063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.302409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.302419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.302741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.302751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.303101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.303111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.303451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.303461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.046 qpair failed and we were unable to recover it. 00:30:26.046 [2024-07-15 20:44:18.303805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.046 [2024-07-15 20:44:18.303815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.304162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.304172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.304369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.304380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 
00:30:26.047 [2024-07-15 20:44:18.304725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.304735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.305083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.305093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.305429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.305439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.305773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.305783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.305984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.305994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.306379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.306389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.306743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.306753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.307096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.307105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.307453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.307463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 00:30:26.047 [2024-07-15 20:44:18.307811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.047 [2024-07-15 20:44:18.307821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.047 qpair failed and we were unable to recover it. 
00:30:26.047 [2024-07-15 20:44:18.308190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.047 [2024-07-15 20:44:18.308200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.047 qpair failed and we were unable to recover it.
00:30:26.047 [the identical three-line connect()/qpair-failure sequence repeats for every reconnect attempt timestamped 20:44:18.308543 through 20:44:18.383522, each failing with errno = 111 against tqpair=0x23b9a50, addr=10.0.0.2, port=4420]
00:30:26.053 [2024-07-15 20:44:18.383890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.053 [2024-07-15 20:44:18.383900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.053 qpair failed and we were unable to recover it.
00:30:26.053 [2024-07-15 20:44:18.384253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.384263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.384617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.384626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.384967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.384976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.385339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.385349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.385724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.385733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.386055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.386065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.386409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.386419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.386767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.386778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.387019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.387029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.387306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.387316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-15 20:44:18.387703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.387713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.388058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.388068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.388430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.388440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.388672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.388683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.389064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.389074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.389367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.389377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.389780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.389789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.390019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.390029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.390377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.390386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.390711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.390720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-15 20:44:18.391051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.391061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.391436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.391447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.391770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.391780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.392018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.392028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.392376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.392387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.392741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.392751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.393071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.393081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.393302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.393313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.393656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.393666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.393907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.393916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 
00:30:26.053 [2024-07-15 20:44:18.394236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.394246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.394485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.394495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.394689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.394698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.395012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.395021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.395279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.395289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.395628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.053 [2024-07-15 20:44:18.395639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.053 qpair failed and we were unable to recover it. 00:30:26.053 [2024-07-15 20:44:18.396173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.396189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.396543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.396555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.396905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.396915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.397147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.397157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 
00:30:26.054 [2024-07-15 20:44:18.397697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.397713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.398053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.398064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.398442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.398453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.398800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.398809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.399043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.399052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.054 [2024-07-15 20:44:18.399428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.054 [2024-07-15 20:44:18.399438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.054 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.399683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.399694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.400044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.400055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.400402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.400412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.400772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.400781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 
00:30:26.328 [2024-07-15 20:44:18.401181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.401191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.401342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.401359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.401773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.401782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.402368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.402383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.402732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.402742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.403091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.403100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.403543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.403553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.403840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.403849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.404200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.404211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.404545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.404555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 
00:30:26.328 [2024-07-15 20:44:18.404906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.404916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.405282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.405294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.405572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.405582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.405950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.405960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.406167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.406177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.406591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.406602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.406928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.406938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.407336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.407347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.407664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.407673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 00:30:26.328 [2024-07-15 20:44:18.407997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.328 [2024-07-15 20:44:18.408007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.328 qpair failed and we were unable to recover it. 
00:30:26.329 [2024-07-15 20:44:18.408388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.408398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.408762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.408772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.409101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.409110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.409409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.409419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.409768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.409777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.410139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.410148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.410490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.410500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.410817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.410827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.411172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.411185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.411461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.411471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 
00:30:26.329 [2024-07-15 20:44:18.411812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.411822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.412237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.412248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.412651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.412660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.412983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.412993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.413337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.413347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.413758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.413767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.414084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.414094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.414398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.414408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.414751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.414761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.414974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.414984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 
00:30:26.329 [2024-07-15 20:44:18.415408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.415419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.415752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.415762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.416109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.416119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.416411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.416421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.416751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.416760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.416992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.417003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.417344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.417355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.417689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.417698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.417935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.417945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.418284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.418293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 
00:30:26.329 [2024-07-15 20:44:18.418719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.418728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.419049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.419058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.419337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.419348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.419707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.419716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.420033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.420043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.420304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.420317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.420662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.420672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.421010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.421019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.421227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.421243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 00:30:26.329 [2024-07-15 20:44:18.421586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.421596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.329 qpair failed and we were unable to recover it. 
00:30:26.329 [2024-07-15 20:44:18.421838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.329 [2024-07-15 20:44:18.421848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.422189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.422198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.422531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.422541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.422869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.422879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.423203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.423212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.423590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.423600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.423916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.423926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.424270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.424281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.424679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.424689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.425022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.425031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 
00:30:26.330 [2024-07-15 20:44:18.425244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.425254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.425617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.425627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.425949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.425960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.426283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.426293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.426634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.426643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.427049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.427058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.427429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.427439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.427684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.427694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.428049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.428058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 00:30:26.330 [2024-07-15 20:44:18.428404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.330 [2024-07-15 20:44:18.428415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.330 qpair failed and we were unable to recover it. 
00:30:26.330 [2024-07-15 20:44:18.428720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.330 [2024-07-15 20:44:18.428730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.330 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats ~200 more times with advancing timestamps, 2024-07-15 20:44:18.429037 through 20:44:18.498942 ...]
00:30:26.335 [2024-07-15 20:44:18.499286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.499297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.499685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.499694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.500027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.500036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.500369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.500379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.500702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.500711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.501028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.501037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.501281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.501290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.501584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.501593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.501950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.501960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 00:30:26.335 [2024-07-15 20:44:18.502306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.335 [2024-07-15 20:44:18.502315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.335 qpair failed and we were unable to recover it. 
00:30:26.335 [2024-07-15 20:44:18.502665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.502674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.503029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.503038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.503398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.503408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.503734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.503743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.504122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.504131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.504512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.504522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.504758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.504767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.505137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.505146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.505501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.505510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.505859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.505869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 
00:30:26.336 [2024-07-15 20:44:18.506213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.506222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.506647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.506657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.506848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.506860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.507198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.507210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.507558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.507569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.507936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.507947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.508271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.508282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.508624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.508633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.508837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.508847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.509175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.509184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 
00:30:26.336 [2024-07-15 20:44:18.509492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.509501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.509858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.509867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.510218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.510228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.510477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.510486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.510823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.510833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.511208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.511217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.511481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.511492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.511829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.511838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.512186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.512195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.512543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.512553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 
00:30:26.336 [2024-07-15 20:44:18.512902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.512912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.513263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.513273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.513602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.513611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.513969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.513978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.514323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.514334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.514699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.514709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.515091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.515100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.515455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.336 [2024-07-15 20:44:18.515464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.336 qpair failed and we were unable to recover it. 00:30:26.336 [2024-07-15 20:44:18.515807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.515816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.516184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.516194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 
00:30:26.337 [2024-07-15 20:44:18.516432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.516444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.516870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.516880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.517210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.517219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.517484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.517493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.517847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.517856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.518200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.518209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.518563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.518573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.518928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.518938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.519292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.519302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.519669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.519678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 
00:30:26.337 [2024-07-15 20:44:18.519885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.519894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.520239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.520249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.520588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.520597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.520943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.520953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.521160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.521169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.521488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.521498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.521820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.521830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.522176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.522185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.522554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.522564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.522885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.522895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 
00:30:26.337 [2024-07-15 20:44:18.523249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.523259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.523603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.523612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.523853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.523862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.524212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.524221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.524611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.524621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.524967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.524977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.525183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.525193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.525616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.525628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.525947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.525957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.526308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.526318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 
00:30:26.337 [2024-07-15 20:44:18.526665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.526674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.527028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.527038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.527225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.527237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.527471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.527481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.527864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.527874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.528196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.528205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.337 qpair failed and we were unable to recover it. 00:30:26.337 [2024-07-15 20:44:18.528573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.337 [2024-07-15 20:44:18.528583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.528928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.528937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.529263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.529274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.529606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.529615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 
00:30:26.338 [2024-07-15 20:44:18.529978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.529987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.530340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.530350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.530709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.530718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.531073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.531082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.531458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.531468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.531697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.531707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.531938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.531948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.532319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.532328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.532652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.532661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.532986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.532996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 
00:30:26.338 [2024-07-15 20:44:18.533346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.533356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.533696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.533705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.534066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.534075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.534423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.534433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.534818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.534827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.535124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.535134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.535383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.535392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.535600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.535609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.535929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.535939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.536263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.536273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 
00:30:26.338 [2024-07-15 20:44:18.536594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.536603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.536951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.536960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.537306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.537316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.537680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.537690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.537891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.537902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.538243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.538254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.538644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.538653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.538888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.538897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.539255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.539265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.539644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.539654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 
00:30:26.338 [2024-07-15 20:44:18.540001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.540011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.540269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.540279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.540473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.540483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.540847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.540856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.541203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.541212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.541553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.541563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.541921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.541931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.338 [2024-07-15 20:44:18.542279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.338 [2024-07-15 20:44:18.542289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.338 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.542461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.542471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.542788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.542797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 
00:30:26.339 [2024-07-15 20:44:18.543131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.543140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.543492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.543501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.543854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.543864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.544208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.544217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.544573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.544583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.544943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.544952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.545294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.545303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.545671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.545681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.546024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.546034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 00:30:26.339 [2024-07-15 20:44:18.546238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.339 [2024-07-15 20:44:18.546249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.339 qpair failed and we were unable to recover it. 
[... the same three-line failure pattern (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 190 more times between 2024-07-15 20:44:18.546 and 20:44:18.610 ...]
00:30:26.344 [2024-07-15 20:44:18.610322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.610332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.610699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.610708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.611077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.611086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.611430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.611440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.611791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.611800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.612217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.612227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.612472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.612482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.612860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.612869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.613200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.613210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.613435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.613445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 
00:30:26.344 [2024-07-15 20:44:18.613791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.613800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.614149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.614158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.614524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.614535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.614862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.614871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.615217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.615226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.615563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.615573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.615753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.615764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.616105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.616115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.616464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.616474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.616819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.616828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 
00:30:26.344 [2024-07-15 20:44:18.617172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.617182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.617550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.617560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.617901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.617910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.618262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.618272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.618610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.618620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.618984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.618993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.619338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.619348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.619697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.619706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.619903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.619913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 00:30:26.344 [2024-07-15 20:44:18.620243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.344 [2024-07-15 20:44:18.620253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.344 qpair failed and we were unable to recover it. 
00:30:26.344 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1538977 Killed "${NVMF_APP[@]}" "$@"
00:30:26.344 [2024-07-15 20:44:18.620603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.620612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 [2024-07-15 20:44:18.620958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.620967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 [2024-07-15 20:44:18.621313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.621324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:26.344 [2024-07-15 20:44:18.621690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.621700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:26.344 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:30:26.344 [2024-07-15 20:44:18.622078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.622088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:30:26.344 [2024-07-15 20:44:18.622440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.622451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:26.344 [2024-07-15 20:44:18.622642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.622653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 [2024-07-15 20:44:18.622922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.344 [2024-07-15 20:44:18.622932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.344 qpair failed and we were unable to recover it.
00:30:26.344 [2024-07-15 20:44:18.623301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.623311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.623665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.623674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.624040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.624049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.624387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.624397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.624739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.624748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.625124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.625136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.625482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.625492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.625846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.625856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.626207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.626216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.626654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.626664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 
00:30:26.345 [2024-07-15 20:44:18.627028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.627037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.627380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.627392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.627755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.627766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.628121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.628132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.628281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.628292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.628637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.628648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.629011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.629022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.629326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.629336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.629720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.629731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1539998 00:30:26.345 [2024-07-15 20:44:18.630071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.630083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1539998
00:30:26.345 [2024-07-15 20:44:18.630397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.630409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1539998 ']'
00:30:26.345 [2024-07-15 20:44:18.630687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.630698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:26.345 [2024-07-15 20:44:18.630903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.630914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:30:26.345 [2024-07-15 20:44:18.631161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.631172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:26.345 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:26.345 [2024-07-15 20:44:18.631507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.631518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:30:26.345 20:44:18 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:26.345 [2024-07-15 20:44:18.631854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.345 [2024-07-15 20:44:18.631865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.345 qpair failed and we were unable to recover it.
00:30:26.345 [2024-07-15 20:44:18.632201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.632213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.632562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.632576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.632909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.632919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.633236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.633247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.633593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.633603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.633921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.633932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.345 qpair failed and we were unable to recover it. 00:30:26.345 [2024-07-15 20:44:18.634300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.345 [2024-07-15 20:44:18.634311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.634685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.634696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.634880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.634890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.635122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.635133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 
00:30:26.346 [2024-07-15 20:44:18.635490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.635501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.635882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.635892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.636206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.636215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.636564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.636574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.636896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.636906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.637222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.637241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.637566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.637576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.637909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.637919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.638245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.638255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.638626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.638636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 
00:30:26.346 [2024-07-15 20:44:18.638988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.638999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.639333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.639345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.639676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.639686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.640063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.640072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.640392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.640402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.640740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.640749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.641101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.641111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.641447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.641456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.641777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.641787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.642126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.642137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 
00:30:26.346 [2024-07-15 20:44:18.642334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.642345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.642690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.642701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.643029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.643039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.643378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.643388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.643672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.643681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.643997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.644006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.644375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.644385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.644721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.644731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.645055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.645064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.645258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.645267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 
00:30:26.346 [2024-07-15 20:44:18.645676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.645685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.646004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.646013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.646309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.646319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.646642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.646652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.646862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.646872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.647215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.647226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.647627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.647637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.346 [2024-07-15 20:44:18.647875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.346 [2024-07-15 20:44:18.647884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.346 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.648207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.648217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.648561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.648571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 
00:30:26.347 [2024-07-15 20:44:18.648890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.648900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.649206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.649216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.649541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.649552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.649905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.649916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.650241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.650252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.650641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.650650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.650965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.650975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.651298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.651309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.651520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.651530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.651859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.651868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 
00:30:26.347 [2024-07-15 20:44:18.652182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.652192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.652531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.652540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.652907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.652918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.653158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.653169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.653574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.653585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.653904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.653914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.654177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.654188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.654541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.654552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.654872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.654881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 00:30:26.347 [2024-07-15 20:44:18.655195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.347 [2024-07-15 20:44:18.655205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.347 qpair failed and we were unable to recover it. 
00:30:26.347 [2024-07-15 20:44:18.655493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.347 [2024-07-15 20:44:18.655503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.347 qpair failed and we were unable to recover it.
00:30:26.347 [... the three-line connect()/qpair-failure sequence above repeats with new timestamps (20:44:18.655 through 20:44:18.682); duplicate entries elided ...]
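For context on the repeated failure: errno = 111 on Linux is ECONNREFUSED, i.e. the host at 10.0.0.2 is reachable but nothing is accepting on TCP port 4420, so the kernel answers each SYN with RST while the NVMe/TCP initiator keeps retrying. A minimal sketch, plain POSIX sockets rather than SPDK's posix.c, that reproduces the same connect() outcome against a reachable host with the port closed:

/* Minimal sketch, not SPDK code: reproduces the errno = 111
 * (ECONNREFUSED) that posix_sock_create keeps reporting. Address
 * and port mirror the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);              /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* With no listener on 10.0.0.2:4420, the peer's kernel answers the
     * SYN with RST and connect() fails with errno = 111 (ECONNREFUSED);
     * an unreachable host would instead time out. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}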
00:30:26.349 [2024-07-15 20:44:18.682213] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization...
00:30:26.349 [2024-07-15 20:44:18.682263] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
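The two lines above are the nvmf target process coming up: the bracketed argument list is handed to DPDK's Environment Abstraction Layer before the application's own flags are parsed. A minimal sketch of that hand-off, using the plain DPDK API rather than SPDK's env_dpdk wrapper (an assumption; SPDK assembles this argv internally):

/* Minimal sketch: EAL consumes flags like -c 0xF0, --no-telemetry,
 * --file-prefix=spdk0, --proc-type=auto, exactly as printed above. */
#include <rte_eal.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int ret = rte_eal_init(argc, argv);   /* parses and consumes EAL flags */
    if (ret < 0) {
        printf("EAL initialization failed\n");
        return 1;
    }
    /* rte_eal_init() returns how many argv entries it consumed; the
     * application's own arguments follow after that point. */
    printf("EAL up, %d args consumed\n", ret);
    rte_eal_cleanup();
    return 0;
}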
00:30:26.350 [... connect() failed / qpair failed sequence continues with new timestamps (20:44:18.685 through 20:44:18.718); duplicate entries elided ...]
00:30:26.623 EAL: No free 2048 kB hugepages reported on node 1
[... same connect() failed / sock connection error / qpair failed triplet repeated, identical except timestamps, through 20:44:18.776 ...]
00:30:26.627 [2024-07-15 20:44:18.775127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.775136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.775445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.775455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.775794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.775803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.776149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.776159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.776574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.776584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.776855] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:26.627 [2024-07-15 20:44:18.776919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.776931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.777288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.777298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.777418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.777428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.777757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.777767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.778141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.778151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 
00:30:26.627 [2024-07-15 20:44:18.778356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.778367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.778814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.778825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.779112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.779121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.779445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.779455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.779818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.779828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.780157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.780167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.780530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.780541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.780900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.780911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.781238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.781248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.781569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.781579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 
00:30:26.627 [2024-07-15 20:44:18.781899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.781909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.627 qpair failed and we were unable to recover it. 00:30:26.627 [2024-07-15 20:44:18.782237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.627 [2024-07-15 20:44:18.782248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.782570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.782579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.782902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.782912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.783194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.783204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.783404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.783415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.783760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.783770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.784145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.784155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.784491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.784501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.784822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.784832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 
00:30:26.628 [2024-07-15 20:44:18.785157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.785166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.785546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.785556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.785875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.785886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.786210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.786220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.786626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.786635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.786871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.786880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.787165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.787174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.787523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.787533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.787854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.787863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.788188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.788197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 
00:30:26.628 [2024-07-15 20:44:18.788588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.788599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.788932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.788942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.789261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.789272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.789619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.789629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.789947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.789956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.790315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.790326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.790518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.790528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.790841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.790851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.791187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.791196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.791520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.791530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 
00:30:26.628 [2024-07-15 20:44:18.791612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.791621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.791942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.791951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.792309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.792319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.792522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.792532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.792852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.792862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.793209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.793218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.793558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.793568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.793889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.793898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.794227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.794244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.794574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.794585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 
00:30:26.628 [2024-07-15 20:44:18.794974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.794983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.795311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.795321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.628 [2024-07-15 20:44:18.795679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.628 [2024-07-15 20:44:18.795689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.628 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.795882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.795892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.796079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.796089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.796407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.796417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.796749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.796759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.797078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.797088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.797424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.797434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.797676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.797686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 
00:30:26.629 [2024-07-15 20:44:18.797999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.798010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.798247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.798257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.798479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.798488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.798870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.798880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.799252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.799262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.799608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.799618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.799937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.799946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.800204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.800213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.800545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.800554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.800905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.800914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 
00:30:26.629 [2024-07-15 20:44:18.801290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.801301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.801638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.801647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.801835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.801844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.802024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.802033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.802387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.802397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.802612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.802621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.803014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.803023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.803259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.803270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.803622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.803631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.803993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.804003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 
00:30:26.629 [2024-07-15 20:44:18.804347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.804357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.804701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.804710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.805060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.805070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.805319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.805328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.805700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.805709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.806039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.806049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.806344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.806353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.806727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.806737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.807108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.807118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.807468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.807479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 
00:30:26.629 [2024-07-15 20:44:18.807860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.807870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.808102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.808112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.629 qpair failed and we were unable to recover it. 00:30:26.629 [2024-07-15 20:44:18.808439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.629 [2024-07-15 20:44:18.808449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.808872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.808882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.809262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.809273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.809634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.809645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.809895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.809906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.810254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.810264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.810586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.810596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.811022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.811032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 
00:30:26.630 [2024-07-15 20:44:18.811405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.811416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.811755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.811766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.812152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.812162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.812535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.812545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.812870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.812880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.813233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.813243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.813583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.813593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.813796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.813805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.814037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.814046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.814435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.814446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 
00:30:26.630 [2024-07-15 20:44:18.814793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.814802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.815116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.815125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.815515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.815525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.815874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.815884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.816235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.816244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.816480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.816489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.816835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.816845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.817190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.817202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.817456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.817466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.817708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.817717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 
00:30:26.630 [2024-07-15 20:44:18.818059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.818068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.818260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.818271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.818625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.818635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.818984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.818993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.819309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.819319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.819536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.819545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.819897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.819907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.820253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.820263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.820583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.820593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 00:30:26.630 [2024-07-15 20:44:18.820917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.820926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.630 qpair failed and we were unable to recover it. 
00:30:26.630 [2024-07-15 20:44:18.821269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.630 [2024-07-15 20:44:18.821279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.821650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.821660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.821985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.821995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.822356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.822366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.822740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.822749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.823094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.823103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.823475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.823485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.823833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.823842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.824187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.824196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.824433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.824443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 
00:30:26.631 [2024-07-15 20:44:18.824787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.824796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.825141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.825150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.825444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.825454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.825830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.825840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.826192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.826204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.826567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.826577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.826929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.826939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.827299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.827309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.827659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.827668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.827991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.828000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 
00:30:26.631 [2024-07-15 20:44:18.828343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.828353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.828676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.828685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.829052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.829061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.829395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.829405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.829618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.829628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.829849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.829858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.830270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.830280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.830481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.830490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.830862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.830872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 00:30:26.631 [2024-07-15 20:44:18.831075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.631 [2024-07-15 20:44:18.831084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.631 qpair failed and we were unable to recover it. 
00:30:26.631 [2024-07-15 20:44:18.831442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.831452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.831679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.831688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.832106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.832115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.832491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.832501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.832823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.832833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.833192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.833202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.833578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.833588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.833993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.834003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.834352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.834362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.834671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.834680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.835040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.835049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.631 [2024-07-15 20:44:18.835428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.631 [2024-07-15 20:44:18.835438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.631 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.835617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.835627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.836034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.836044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.836456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.836466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.836817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.836827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.837132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.837141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.837488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.837498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.837845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.837854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.838236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.838247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.838571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.838581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.838935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.838945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.839322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.839333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.839695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.839704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.840022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.840031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.840377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.840387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.840741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.840750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.841097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.841107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.841310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.841320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.841659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.841669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.841850] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:26.632 [2024-07-15 20:44:18.841879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:26.632 [2024-07-15 20:44:18.841886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:30:26.632 [2024-07-15 20:44:18.841892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:30:26.632 [2024-07-15 20:44:18.841898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:30:26.632 [2024-07-15 20:44:18.842017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.842026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.842070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:30:26.632 [2024-07-15 20:44:18.842359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.842369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.842270] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:30:26.632 [2024-07-15 20:44:18.842437] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:30:26.632 [2024-07-15 20:44:18.842438] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:30:26.632 [2024-07-15 20:44:18.842596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.842606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.842871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.842880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.843253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.843263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.843334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.843347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.843607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.843617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.843895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.843904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.844251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.844261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.844593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.844602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.844824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.844833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.845162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.845172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.845571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.845582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.845961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.845971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.846322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.846332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.846450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.846459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.846700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.846709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.847034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.847043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.847251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.847261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.632 [2024-07-15 20:44:18.847664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.632 [2024-07-15 20:44:18.847674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.632 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.847744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.847753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.847994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.848003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.848398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.848409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.848763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.848773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.848979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.848989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.849389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.849399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.849630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.849640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.849831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.849840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.850203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.850212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.850553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.850564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.850912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.850921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.851272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.851282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.851608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.851617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.851810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.851819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.852174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.852184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.852563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.852573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.852760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.852769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.853052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.853061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.853391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.853401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.853785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.853795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.854129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.854139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.854514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.854524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.854873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.854882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.855234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.855244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.855580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.855589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.855956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.855966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.856312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.856322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.856675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.856685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.857034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.857043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.857325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.857336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.857738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.857748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.858098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.858108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.858463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.858473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.858671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.858681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.858883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.858892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.859232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.859243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.859642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.859652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.859990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.860000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.860348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.860359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.860585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.860595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.860783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.633 [2024-07-15 20:44:18.860793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.633 qpair failed and we were unable to recover it.
00:30:26.633 [2024-07-15 20:44:18.861006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.861016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.861372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.861382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.861734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.861744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.861994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.862004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.862361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.862371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.862577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.862587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.862805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.862814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.863098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.863107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.863468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.863478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.863861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.863871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.864239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.864250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.864648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.864658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.864817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.864828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.865195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.865205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.865554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.865565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.865807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.865816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.866156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.866165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.866417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.866427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.866770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.866779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.867118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.867128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.867390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.867400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.867753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.867763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.868122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.868132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.868233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.868243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.868549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.868558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.868951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.868961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.869286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.869297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.869565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.869574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.869960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.869970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.870304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.870313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.870678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.870688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.871063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.871072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.871450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.871460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.871806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.871816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.872016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.872027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.872221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.634 [2024-07-15 20:44:18.872237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.634 qpair failed and we were unable to recover it.
00:30:26.634 [2024-07-15 20:44:18.872497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.872506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.872900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.872909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.873154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.873163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.873512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.873524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.873901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.873911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.874113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.874122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.874304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.874313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.874703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.874712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.875066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.875076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.875549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.875559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.875761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.875771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.876161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.876170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.876518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.876528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.876869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.876878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.877237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.877247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.877601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.877610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.877851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.877860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.878236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.878246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.878482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.878492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.878702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.878711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.878890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.878900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.879236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.879246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.879633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.879643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.880000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.880010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.880200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.880210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.880538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.880549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.880921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.880931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.881145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.881155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.881415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.881425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.881759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.881768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.881976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.881988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.882338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.882348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.882771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.882781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.883154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.883164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.883375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.883384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.883762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.883771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.884003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.884013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.884321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.884332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.884516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.884525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.884713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.635 [2024-07-15 20:44:18.884722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.635 qpair failed and we were unable to recover it.
00:30:26.635 [2024-07-15 20:44:18.884906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 20:44:18.884914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.635 qpair failed and we were unable to recover it. 00:30:26.635 [2024-07-15 20:44:18.885145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.635 [2024-07-15 20:44:18.885154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.635 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.885512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.885522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.885844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.885853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.886204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.886213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.886566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.886576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.886950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.886959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.887310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.887320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.887380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.887389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.887709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.887718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 
00:30:26.636 [2024-07-15 20:44:18.888041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.888050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.888394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.888403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.888732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.888741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.889079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.889089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.889461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.889471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.889678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.889687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.890025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.890035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.890237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.890247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.890466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.890475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.890850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.890859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 
00:30:26.636 [2024-07-15 20:44:18.891187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.891197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.891556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.891566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.891770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.891779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.892131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.892140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.892343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.892353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.892709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.892718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.892961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.892971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.893337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.893347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.893733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.893742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.894114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.894124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 
00:30:26.636 [2024-07-15 20:44:18.894484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.894494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.894849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.894858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.895211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.895220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.895582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.895592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.895910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.895920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.896332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.896341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.896599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.896608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.896809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.896818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.897148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.897157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.897478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.897488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 
00:30:26.636 [2024-07-15 20:44:18.897834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.897843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.898252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.898261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.898634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.636 [2024-07-15 20:44:18.898643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.636 qpair failed and we were unable to recover it. 00:30:26.636 [2024-07-15 20:44:18.898849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.898858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.899196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.899205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.899406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.899416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.899763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.899772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.900073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.900082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.900275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.900284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.900523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.900532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 
00:30:26.637 [2024-07-15 20:44:18.900913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.900922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.900985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.900994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.901324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.901333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.901705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.901714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.902067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.902076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.902430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.902440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.902793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.902802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.903029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.903038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.903238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.903252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.903599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.903608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 
00:30:26.637 [2024-07-15 20:44:18.903953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.903962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.904185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.904194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.904572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.904582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.904784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.904793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.905168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.905177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.905532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.905542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.905765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.905774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.906146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.906156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.906526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.906536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.906731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.906740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 
00:30:26.637 [2024-07-15 20:44:18.907073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.907082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.907291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.907301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.907693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.907703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.907879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.907889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.908080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.908089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.908441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.908451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.908637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.908647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.908873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.908883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.909224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.909238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.909595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.909604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 
00:30:26.637 [2024-07-15 20:44:18.909955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.909964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.910315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.910325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.910700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.910710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.911055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.911064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.637 [2024-07-15 20:44:18.911298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.637 [2024-07-15 20:44:18.911307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.637 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.911509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.911521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.911847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.911856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.912211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.912220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.912597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.912606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.912958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.912967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 
00:30:26.638 [2024-07-15 20:44:18.913326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.913335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.913711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.913720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.913920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.913930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.914269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.914279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.914690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.914699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.915028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.915037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.915239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.915249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.915610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.915619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.915985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.915994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.916201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.916211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 
00:30:26.638 [2024-07-15 20:44:18.916573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.916583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.916944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.916953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.917310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.917320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.917611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.917621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.917826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.917835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.918175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.918184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.918558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.918568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.918896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.918905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.919255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.919264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.919516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.919525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 
00:30:26.638 [2024-07-15 20:44:18.919740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.919749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.920091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.920101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.920448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.920457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.920819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.920829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.921175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.921185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.921529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.921538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.921728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.921737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.922086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.922095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.922472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.922482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.638 [2024-07-15 20:44:18.922900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.922909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 
00:30:26.638 [2024-07-15 20:44:18.923290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.638 [2024-07-15 20:44:18.923300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.638 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.923675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.923684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.923747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.923757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.923914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.923923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.924283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.924293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.924680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.924689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.924880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.924890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.925183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.925192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.925542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.925552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.925753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.925762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 
00:30:26.639 [2024-07-15 20:44:18.925821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.925829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.926177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.926187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.926541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.926551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.926926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.926935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.927127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.927136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.927499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.927509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.927859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.927868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.928068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.928077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.928416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.928426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.928759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.928768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 
00:30:26.639 [2024-07-15 20:44:18.928954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.928963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.929143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.929152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.929526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.929536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.929896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.929905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.930255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.930265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.930650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.930659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.930983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.930992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.931354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.931364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.931579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.931589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 00:30:26.639 [2024-07-15 20:44:18.931770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.639 [2024-07-15 20:44:18.931779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.639 qpair failed and we were unable to recover it. 
00:30:26.639 [2024-07-15 20:44:18.932129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.932138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.932507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.932517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.932921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.932930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.933280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.933291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.933682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.933692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.934077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.934086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.934443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.934452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.934816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.934825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.935192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.935202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.935602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.935611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.935799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.935809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.639 [2024-07-15 20:44:18.936034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.639 [2024-07-15 20:44:18.936043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.639 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.936426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.936436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.936783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.936792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.937149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.937158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.937507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.937516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.937841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.937850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.938052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.938062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.938245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.938255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.938477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.938486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.938698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.938707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.939076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.939085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.939434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.939444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.939817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.939826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.940193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.940202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.940298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.940307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.940675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.940684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.941073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.941082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.941375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.941385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.941735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.941744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.942079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.942090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.942275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.942285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.942647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.942656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.943007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.943016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.943258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.943268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.943600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.943609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.943938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.943947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.944314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.944324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.944692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.944702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.945082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.945092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.945437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.945447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.945801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.945811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.945876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.945884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.946196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.946205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.946597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.946608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.946964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.946973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.947222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.947234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.947586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.947595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.947965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.947974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.948340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.948349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.948743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.948752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.949099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.949109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.949576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.640 [2024-07-15 20:44:18.949586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.640 qpair failed and we were unable to recover it.
00:30:26.640 [2024-07-15 20:44:18.949825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.949834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.950185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.950195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.950381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.950390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.950748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.950757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.951183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.951194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.951584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.951594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.951968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.951978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.952331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.952340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.952597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.952607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.952947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.952957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.953210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.953220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.953593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.953603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.953914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.953923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.954134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.954143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.954461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.954471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.954913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.954922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.955274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.955284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.955670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.955679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.956033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.956042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.956393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.956403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.956728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.956737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.956912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.956921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.957336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.957346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.957539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.957548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.957951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.957960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.958297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.958307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.958670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.958679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.959037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.959047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.959251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.959260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.959611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.959620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.959817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.959827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.960010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.960019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.960203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.960212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.960536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.960545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.960964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.960974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.961318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.961328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.961732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.961741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.961948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.961957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.962141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.962151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.962483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.962493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.962841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.962850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.641 [2024-07-15 20:44:18.963054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.641 [2024-07-15 20:44:18.963064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.641 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.963263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.963273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.963636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.963646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.963972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.963981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.964332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.964342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.964676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.964685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.965045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.965054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.965432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.965442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.965662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.965671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.966032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.966041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.966401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.966411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.966740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.966749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.966995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.967005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.967244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.967254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.967540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.967549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.967985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.967995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.968170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.968180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.968381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.968391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.968672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.968681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.969015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.969025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.969409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.969419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.969780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.969789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.970143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.970153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.970348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.970358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.970712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.970721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.971085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.971094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.971527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.971537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.971911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.971920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.972258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.972267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.972618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.972627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.972828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.972837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.973011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.973022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.973356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.973366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.973727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.973736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.973927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.973937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.974266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.642 [2024-07-15 20:44:18.974276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.642 qpair failed and we were unable to recover it.
00:30:26.642 [2024-07-15 20:44:18.974624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.974633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.974875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.974884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.975188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.975197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.975537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.975546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.975897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.975906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.976275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.976285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.976467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.976477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.976672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.976681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.977021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.977031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.977358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.977367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.977766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.977775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.978108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.978118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.978468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.978478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.978836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.978845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.979196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.979206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.979638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.979648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.979862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.979872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.980078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.980088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.980307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.980316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.980717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.980727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.980970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.980979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.981320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.981330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.981712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.981723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.982078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.982087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.982460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.982470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.982524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.982533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.982732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.982741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.983060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.983070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.983299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.983309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.983508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.983517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.983874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.983883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.984259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.984268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.984608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.984617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.984923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.984932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.985330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.985340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.985531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.985541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.985776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.985785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.986178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.986187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.986530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.986540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.986896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.986905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.987146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.987155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.987404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.643 [2024-07-15 20:44:18.987414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.643 qpair failed and we were unable to recover it.
00:30:26.643 [2024-07-15 20:44:18.987784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.987793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.988251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.988260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.988613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.988622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.989037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.989046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.989441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.989450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.989805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.644 [2024-07-15 20:44:18.989814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.644 qpair failed and we were unable to recover it.
00:30:26.644 [2024-07-15 20:44:18.990172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.990182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.990373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.990386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.990753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.990763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.991152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.991161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.991572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.991582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.991790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.991799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.992154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.992163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.992515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.992525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.992729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.992738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.992932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.992942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.993253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.993263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.993465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.993474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.993785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.993794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.994155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.994165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.994448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.994458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.994832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.994842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.995042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.931 [2024-07-15 20:44:18.995052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.931 qpair failed and we were unable to recover it.
00:30:26.931 [2024-07-15 20:44:18.995273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.995283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.995527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.995536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.995712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.995721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.995953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.995962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.996191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.996200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.996587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.996597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.996812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.996822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.997185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.997194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.997563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.997573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.997777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.997786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.998149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.998158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.998508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.998518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.998855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.998865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.999068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.999077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.999430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.932 [2024-07-15 20:44:18.999440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.932 qpair failed and we were unable to recover it.
00:30:26.932 [2024-07-15 20:44:18.999701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:18.999710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.000097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.000106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.000438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.000447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.000827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.000836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.001215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.001224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.001428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.001438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.001768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.001777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.002138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.002147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.002538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.002547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.002775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.002784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 
00:30:26.932 [2024-07-15 20:44:19.003174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.003186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.003250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.003258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.003484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.003494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.003858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.003867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.004268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.004278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.004642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.004651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.004721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.004731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.005048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.005057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.005399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.005409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.005665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.005674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 
00:30:26.932 [2024-07-15 20:44:19.006011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.006020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.006394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.006404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.006614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.006623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.006832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.006842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.007195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.007204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.007556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.007566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.932 [2024-07-15 20:44:19.007948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.932 [2024-07-15 20:44:19.007958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.932 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.008284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.008294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.008418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.008428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.008489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.008499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 
00:30:26.933 [2024-07-15 20:44:19.008588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.008597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.008907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.008917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.009259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.009269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.009629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.009638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.009828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.009839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.010052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.010063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.010256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.010266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.010642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.010654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.010979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.010988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.011320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.011330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 
00:30:26.933 [2024-07-15 20:44:19.011673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.011683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.012059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.012068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.012391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.012402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.012620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.012630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.012818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.012827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.013158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.013167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.013572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.013582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.013785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.013795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.014039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.014048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.014422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.014432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 
00:30:26.933 [2024-07-15 20:44:19.014784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.014793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.014994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.015003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.015334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.015344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.015700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.015710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.015913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.015923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.016265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.016274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.016478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.016487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.016737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.016747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.017097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.017106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.017479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.017489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 
00:30:26.933 [2024-07-15 20:44:19.017848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.017857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.018208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.018217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.018626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.018636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.018838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.018848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.019062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.019074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.019285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.019295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.019617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.019627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.019964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.933 [2024-07-15 20:44:19.019973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.933 qpair failed and we were unable to recover it. 00:30:26.933 [2024-07-15 20:44:19.020325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.020334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.020518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.020528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 
00:30:26.934 [2024-07-15 20:44:19.020606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.020614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.020947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.020956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.021283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.021292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.021511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.021520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.021836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.021846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.022219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.022228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.022472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.022481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.022696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.022705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.023081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.023091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.023456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.023466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 
00:30:26.934 [2024-07-15 20:44:19.023668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.023677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.023872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.023882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.024250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.024259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.024609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.024619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.024980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.024989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.025345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.025355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.025729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.025738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.025928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.025937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.026140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.026149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.026505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.026515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 
00:30:26.934 [2024-07-15 20:44:19.026873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.026882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.027130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.027139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.027522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.027532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.027894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.027903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.028254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.028263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.028595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.028605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.028959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.028969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.029171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.029180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.029376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.029386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.029733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.029742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 
00:30:26.934 [2024-07-15 20:44:19.029936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.029945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.030171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.030180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.030442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.030452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.030846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.030855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.031164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.031173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.031546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.031556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.031757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.031767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.031956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.031965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.032202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.934 [2024-07-15 20:44:19.032212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.934 qpair failed and we were unable to recover it. 00:30:26.934 [2024-07-15 20:44:19.032547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.032557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 
00:30:26.935 [2024-07-15 20:44:19.032929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.032938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.033303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.033313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.033524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.033534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.033838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.033848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.033912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.033921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.034224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.034237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.034496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.034505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.034851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.034860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.035261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.035271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.035656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.035666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 
00:30:26.935 [2024-07-15 20:44:19.036017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.036026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.036236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.036246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.036464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.036473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.036680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.036689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.037014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.037024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.037284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.037294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.037433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.037442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.037797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.037807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.038176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.038185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 00:30:26.935 [2024-07-15 20:44:19.038360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.935 [2024-07-15 20:44:19.038370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.935 qpair failed and we were unable to recover it. 
00:30:26.935 [2024-07-15 20:44:19.038755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:26.935 [2024-07-15 20:44:19.038765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420
00:30:26.935 qpair failed and we were unable to recover it.
00:30:26.935 [... the three messages above repeat for 99 further connect attempts to tqpair=0x23b9a50, timestamps 2024-07-15 20:44:19.039145 through 20:44:19.069015 ...]
00:30:26.938 [2024-07-15 20:44:19.069422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.069432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.069626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.069636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.069958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.069967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.070177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.070186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.070285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.070295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23b9a50 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with 
error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Read completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 Write completed with error (sct=0, sc=8) 00:30:26.938 starting I/O failed 00:30:26.938 [2024-07-15 20:44:19.070515] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:26.938 [2024-07-15 20:44:19.070896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.070907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.071285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.071299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.071561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.071568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.071761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.071767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.072080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.072087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.072346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.072352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 00:30:26.938 [2024-07-15 20:44:19.072732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.938 [2024-07-15 20:44:19.072739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.938 qpair failed and we were unable to recover it. 
00:30:26.938 [... 90 further connect attempts to tqpair=0x7f5af8000b90 fail the same way (connect() failed, errno = 111; qpair failed and we were unable to recover it), timestamps 2024-07-15 20:44:19.072939 through 20:44:19.098335 ...]
00:30:26.940 [2024-07-15 20:44:19.098670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.940 [2024-07-15 20:44:19.098677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.940 qpair failed and we were unable to recover it. 00:30:26.940 [2024-07-15 20:44:19.099007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.940 [2024-07-15 20:44:19.099014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.940 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.099343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.099350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.099695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.099701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.100059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.100067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.100279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.100285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.100458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.100465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.100747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.100754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.100959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.100966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.101207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.101213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 
00:30:26.941 [2024-07-15 20:44:19.101632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.101639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.101973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.101980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.102299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.102306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.102623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.102629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.102828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.102835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.103108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.103114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.103446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.103453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.103794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.103801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.104000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.104007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.104347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.104353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 
00:30:26.941 [2024-07-15 20:44:19.104740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.104746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.105072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.105078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.105410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.105417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.105603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.105610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.105947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.105953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.106141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.106148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.106509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.106515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.106840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.106847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.107170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.107176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.107393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.107400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 
00:30:26.941 [2024-07-15 20:44:19.107761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.107767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.107980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.107986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.108190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.108197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.108530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.108537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.108860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.108866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.109189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.109198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.109296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.109304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.109692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.109699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.109900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.109906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.110266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.110273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 
00:30:26.941 [2024-07-15 20:44:19.110645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.110651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.110872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.110879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.941 [2024-07-15 20:44:19.111242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.941 [2024-07-15 20:44:19.111249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.941 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.111573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.111579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.111947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.111954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.112154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.112162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.112347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.112354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.112706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.112713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.112910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.112916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.113127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.113134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 
00:30:26.942 [2024-07-15 20:44:19.113571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.113578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.113898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.113904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.114228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.114237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.114415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.114422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.114656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.114663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.114716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.114722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.114858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.114871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.115187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.115194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.115569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.115577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.115833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.115840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 
00:30:26.942 [2024-07-15 20:44:19.116157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.116164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.116550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.116557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.116878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.116886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.117263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.117270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.117714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.117721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.117902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.117910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.118220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.118226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.118638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.118645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.118967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.118974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.119312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.119318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 
00:30:26.942 [2024-07-15 20:44:19.119495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.119502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.119560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.119568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.119934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.119941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.120148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.120154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.120373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.120379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.120740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.120748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.121082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.121089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.121421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.121427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.121771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.121778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.122116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.122124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 
00:30:26.942 [2024-07-15 20:44:19.122476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.122482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.122817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.122824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.123155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.123161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.942 [2024-07-15 20:44:19.123357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.942 [2024-07-15 20:44:19.123365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.942 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.123539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.123546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.123795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.123802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.124060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.124066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.124116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.124123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.124480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.124487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.124882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.124888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 
00:30:26.943 [2024-07-15 20:44:19.125241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.125248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.125437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.125444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.125848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.125855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.126180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.126186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.126385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.126393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.126741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.126747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.127072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.127079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.127278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.127286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.127494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.127501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.127853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.127860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 
00:30:26.943 [2024-07-15 20:44:19.128181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.128188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.128525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.128531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.128724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.128731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.128897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.128904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.129138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.129145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.129465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.129471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.129796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.129803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.130127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.130133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.130471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.130478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.130682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.130688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 
00:30:26.943 [2024-07-15 20:44:19.131033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.131040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.131396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.131403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.131732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.131738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.131906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.131912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.132119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.132126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.132227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.132241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.132425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.132433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.132787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.132795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.132984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.132992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.133333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.133339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 
00:30:26.943 [2024-07-15 20:44:19.133670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.133676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.134008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.134015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.134299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.134306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.134650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.134657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.943 [2024-07-15 20:44:19.134987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.943 [2024-07-15 20:44:19.134994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.943 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.135191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.135199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.135541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.135548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.135737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.135744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.136130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.136137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.136504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.136511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 
00:30:26.944 [2024-07-15 20:44:19.136836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.136842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.137178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.137185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.137384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.137392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.137748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.137754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.138168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.138174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.138498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.138505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.138706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.138713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.139093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.139099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.139467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.139474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.139818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.139825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 
00:30:26.944 [2024-07-15 20:44:19.140024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.140031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.140201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.140208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.140434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.140442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.140814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.140821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.141132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.141139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.141495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.141502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.141798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.141805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.142173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.142179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.142597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.142603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.142850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.142858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 
00:30:26.944 [2024-07-15 20:44:19.143288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.143295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.143592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.143599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.143819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.143826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.144141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.144148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.144404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.144411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.144501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.144508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.144716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.144723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.145048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.145054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.145271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.145277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.145737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.145744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 
00:30:26.944 [2024-07-15 20:44:19.146062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.944 [2024-07-15 20:44:19.146069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.944 qpair failed and we were unable to recover it. 00:30:26.944 [2024-07-15 20:44:19.146399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.146406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.146750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.146757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.146979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.146986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.147224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.147234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.147532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.147538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.147608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.147614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.147921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.147927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.147983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.147989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.148171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.148178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 
00:30:26.945 [2024-07-15 20:44:19.148528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.148534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.148872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.148878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.149220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.149227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.149475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.149481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.149694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.149700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.149922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.149929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.150142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.150148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.150346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.150355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.150421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.150428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.150779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.150786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 
00:30:26.945 [2024-07-15 20:44:19.151130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.151137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.151332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.151340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.151690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.151696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.152064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.152071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.152273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.152280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.152695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.152701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.152902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.152908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.153174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.153181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.153512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.153519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.153848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.153855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 
00:30:26.945 [2024-07-15 20:44:19.154191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.154197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.154399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.154406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.154666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.154673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.155008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.155015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.155196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.155203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.155536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.155544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.155869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.155875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.156212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.156218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.156549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.156556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.156888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.156894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 
00:30:26.945 [2024-07-15 20:44:19.157217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.157223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.945 [2024-07-15 20:44:19.157465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.945 [2024-07-15 20:44:19.157472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.945 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.157815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.157822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.158020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.158027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.158389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.158396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.158766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.158773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.158986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.158994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.159339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.159346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.159562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.159569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.159760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.159766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 
00:30:26.946 [2024-07-15 20:44:19.160186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.160192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.160376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.160383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.160782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.160789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.160844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.160850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.161157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.161164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.161353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.161359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.161609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.161616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.161790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.161802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.162010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.162017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.162430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.162437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 
00:30:26.946 [2024-07-15 20:44:19.162761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.162768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.163089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.163096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.163564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.163571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.163888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.163894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.164101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.164108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.164305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.164311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.164633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.164640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.164873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.164880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.165214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.165221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.165411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.165417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 
00:30:26.946 [2024-07-15 20:44:19.165773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.165779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.166014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.166021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.166384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.166391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.166571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.166577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.166768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.166775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.167079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.167087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.167301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.167309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.167628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.167634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.167973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.167979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.168308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.168315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 
00:30:26.946 [2024-07-15 20:44:19.168534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.168541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.168785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.168791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.946 qpair failed and we were unable to recover it. 00:30:26.946 [2024-07-15 20:44:19.169123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.946 [2024-07-15 20:44:19.169130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.169173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.169179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.169584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.169591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.169922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.169928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.169970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.169977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.170298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.170305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.170645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.170652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.170973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.170980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 
00:30:26.947 [2024-07-15 20:44:19.171311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.171319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.171667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.171674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.172019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.172026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.172248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.172255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.172562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.172569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.172764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.172771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.172935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.172942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.173132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.173139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.173515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.173522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.173870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.173876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 
00:30:26.947 [2024-07-15 20:44:19.174163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.174170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.174357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.174364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.174594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.174601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.174961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.174967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.175293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.175306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.175612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.175619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.175947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.175954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.176208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.176214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.176498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.176505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.176818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.176825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 
00:30:26.947 [2024-07-15 20:44:19.177004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.177011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.177321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.177328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.177541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.177547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.177728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.177735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.177902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.177909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.178225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.178236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.178421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.178428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.178597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.178603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.178911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.178917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.179174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.179181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 
00:30:26.947 [2024-07-15 20:44:19.179495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.179502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.179855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.179862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.180283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.180290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.947 qpair failed and we were unable to recover it. 00:30:26.947 [2024-07-15 20:44:19.180480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.947 [2024-07-15 20:44:19.180486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.180811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.180818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.181031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.181037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.181222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.181228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.181542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.181549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.181867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.181874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.182079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.182085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 
00:30:26.948 [2024-07-15 20:44:19.182271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.182277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.182622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.182629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.183009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.183015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.183242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.183249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.183602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.183609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.183825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.183834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.183987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.183994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.184413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.184420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.184759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.184765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.185095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.185102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 
00:30:26.948 [2024-07-15 20:44:19.185514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.185520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.185843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.185849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.186123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.186130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.186479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.186486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.186667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.186674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.186989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.186995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.187187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.187195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.187538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.187544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.187763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.187770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 00:30:26.948 [2024-07-15 20:44:19.188135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.948 [2024-07-15 20:44:19.188141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.948 qpair failed and we were unable to recover it. 
00:30:26.953 [2024-07-15 20:44:19.243146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.243152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.243243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.243249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.243669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.243677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.243886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.243893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.244096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.244104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.244238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.244246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.244543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.244550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.244896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.244902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.245238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.245245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.245561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.245568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 
00:30:26.953 [2024-07-15 20:44:19.245926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.245932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.246003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.246009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.246336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.246343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.246713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.246720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.247090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.247096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.247278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.247285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.247711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.247718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.247917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.247925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.248263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.248270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.953 qpair failed and we were unable to recover it. 00:30:26.953 [2024-07-15 20:44:19.248494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.953 [2024-07-15 20:44:19.248500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 
00:30:26.954 [2024-07-15 20:44:19.248681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.248687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.248918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.248925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.249124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.249130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.249374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.249381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.249607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.249614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.249815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.249822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.250219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.250226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.250569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.250575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.250905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.250911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.251166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.251173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 
00:30:26.954 [2024-07-15 20:44:19.251377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.251384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.251439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.251445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.251624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.251632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.251993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.252000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.252326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.252333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.252669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.252675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.252999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.253006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.253193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.253199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.253517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.253524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.253714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.253722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 
00:30:26.954 [2024-07-15 20:44:19.253889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.253896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.254119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.254127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.254296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.254305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.254608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.254614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.254905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.254912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.255247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.255254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.255612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.255618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.255982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.255988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.256317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.256324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.256667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.256673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 
00:30:26.954 [2024-07-15 20:44:19.256922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.256929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.257258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.257266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.257462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.257469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.257786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.257792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.258118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.258125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.258480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.258487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.258843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.258849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.259072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.259079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.259378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.259385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.954 [2024-07-15 20:44:19.259721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.259727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 
00:30:26.954 [2024-07-15 20:44:19.260100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.954 [2024-07-15 20:44:19.260107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.954 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.260403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.260411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.260753] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.260760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.261112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.261118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.261319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.261327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.261703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.261711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.261758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.261764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.261962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.261969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.262309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.262316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.262532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.262538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 
00:30:26.955 [2024-07-15 20:44:19.262915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.262922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.263268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.263275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.263465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.263472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.263770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.263776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.263966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.263974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.264274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.264281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.264503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.264510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.264878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.264885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.265206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.265213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.265567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.265574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 
00:30:26.955 [2024-07-15 20:44:19.265905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.265911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.266282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.266288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.266554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.266562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.266900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.266907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.267110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.267116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.267466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.267472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.267835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.267841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.268171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.268177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.268241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.268247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.268591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.268597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 
00:30:26.955 [2024-07-15 20:44:19.268797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.268805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.268850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.268857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.269179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.269185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.269534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.269541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.269735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.955 [2024-07-15 20:44:19.269742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.955 qpair failed and we were unable to recover it. 00:30:26.955 [2024-07-15 20:44:19.270085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.270091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.270420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.270427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.270589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.270596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.271007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.271014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.271343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.271349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 
00:30:26.956 [2024-07-15 20:44:19.271686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.271692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.272011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.272018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.272341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.272347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.272556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.272563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.272924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.272931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.273283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.273290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.273480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.273487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.273661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.273667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.273979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.273985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.274336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.274343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 
00:30:26.956 [2024-07-15 20:44:19.274724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.274731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.275066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.275072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.275403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.275410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.275567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.275574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.275946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.275953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.276281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.276288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.276614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.276621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.276909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.276916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.276973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.276980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.277159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.277165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 
00:30:26.956 [2024-07-15 20:44:19.277336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.277343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.277578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.277586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.277792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.277801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.278150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.278156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.278533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.278540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.278869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.278875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.279282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.279294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.279424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.279431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.279755] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.279762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.279960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.279968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 
00:30:26.956 [2024-07-15 20:44:19.280134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.280141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.280213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.280220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.280579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.280587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.281039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.281046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.281428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.281435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.281686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.956 [2024-07-15 20:44:19.281693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.956 qpair failed and we were unable to recover it. 00:30:26.956 [2024-07-15 20:44:19.282015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.282021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:26.957 [2024-07-15 20:44:19.282342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.282350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:26.957 [2024-07-15 20:44:19.282704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.282711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:26.957 [2024-07-15 20:44:19.282913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.282920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 
00:30:26.957 [2024-07-15 20:44:19.283252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.283260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:26.957 [2024-07-15 20:44:19.283619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.283625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:26.957 [2024-07-15 20:44:19.283847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:26.957 [2024-07-15 20:44:19.283854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:26.957 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.284258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.284267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.284512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.284520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.284710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.284716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.284899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.284906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.285105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.285112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.285509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.285517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.285857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.285865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 
00:30:27.232 [2024-07-15 20:44:19.286155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.286162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.286521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.286528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.286718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.286726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.286948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.286955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.287291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.287299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.287621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.287627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.287828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.287835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.288203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.288210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.288556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.288563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.232 [2024-07-15 20:44:19.288907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.288914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 
00:30:27.232 [2024-07-15 20:44:19.289158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.232 [2024-07-15 20:44:19.289164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.232 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.289514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.289521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.289732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.289742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.290086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.290093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.290431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.290438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.290756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.290763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.290961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.290968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.291166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.291172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.291346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.291354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.291598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.291605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 
00:30:27.233 [2024-07-15 20:44:19.291981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.291987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.292187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.292194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.292536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.292543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.292746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.292752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.293058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.293064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.293307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.293314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.293708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.293715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.294048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.294055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.294400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.294406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.294616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.294623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 
00:30:27.233 [2024-07-15 20:44:19.294816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.294823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.295195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.295201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.295558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.295564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.295764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.295770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.295919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.295925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.296235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.296242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.296433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.296440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.296832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.296839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.297031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.297037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.297398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.297406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 
00:30:27.233 [2024-07-15 20:44:19.297790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.297796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.298174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.298181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.298420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.298427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.298769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.298775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.299103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.299110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.299209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.299215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.299396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.299403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.299725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.299731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.300075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.300082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 00:30:27.233 [2024-07-15 20:44:19.300289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.233 [2024-07-15 20:44:19.300296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.233 qpair failed and we were unable to recover it. 
00:30:27.233 [2024-07-15 20:44:19.300565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.300572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.300892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.300899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.301235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.301244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.301670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.301678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.301934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.301941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.302153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.302161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.302356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.302363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.302573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.302580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.302782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.302788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.302836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.302843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 
00:30:27.234 [2024-07-15 20:44:19.303191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.303198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.303534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.303541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.303875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.303881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.304212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.304219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.304406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.304414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.304739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.304747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.304951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.304958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.305326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.305333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.305685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.305691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.305905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.305913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 
00:30:27.234 [2024-07-15 20:44:19.306275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.306282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.306339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.306345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.306727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.306734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.307065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.307072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.307421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.307427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.307658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.307664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.307974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.307983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.308182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.308190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.308402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.308409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.308609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.308616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 
00:30:27.234 [2024-07-15 20:44:19.308790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.308800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.309033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.309041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.309251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.309258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.309496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.309504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.309834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.309841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.310192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.310198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.310597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.310605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.310938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.310945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.311149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.311158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 00:30:27.234 [2024-07-15 20:44:19.311402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.234 [2024-07-15 20:44:19.311409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.234 qpair failed and we were unable to recover it. 
00:30:27.235 [2024-07-15 20:44:19.311710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.311717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.312148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.312155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.312337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.312348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.312738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.312745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.313066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.313073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.313409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.313416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.313473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.313479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.313681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.313688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.314035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.314041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.314415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.314422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 
00:30:27.235 [2024-07-15 20:44:19.314611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.314619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.314940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.314947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.315327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.315334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.315673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.315680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.315983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.315989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.316329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.316336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.316535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.316542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.316861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.316868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.317210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.317217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.317408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.317416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 
00:30:27.235 [2024-07-15 20:44:19.317626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.317632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.317826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.317832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.318028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.318036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.318365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.318372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.318762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.318769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.319110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.319117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.319455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.319463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.319788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.319795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.319953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.319960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.320380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.320387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 
00:30:27.235 [2024-07-15 20:44:19.320712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.320718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.321050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.321057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.321383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.321389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.321617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.321623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.321872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.321879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.322275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.322283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.322585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.322591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.322929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.322935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.323272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.235 [2024-07-15 20:44:19.323278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.235 qpair failed and we were unable to recover it. 00:30:27.235 [2024-07-15 20:44:19.323609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.323615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 
00:30:27.236 [2024-07-15 20:44:19.323936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.323942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.324266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.324273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.324474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.324482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.324850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.324857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.325110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.325116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.325352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.325359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.325716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.325723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.326055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.326062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.326437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.326444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.326782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.326789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 
00:30:27.236 [2024-07-15 20:44:19.327012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.327019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.327480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.327487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.327806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.327812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.328140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.328147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.328485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.328493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.328870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.328878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.328926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.328934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.329306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.329313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.329548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.329555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.329892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.329899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 
00:30:27.236 [2024-07-15 20:44:19.330078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.330086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.330409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.330416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.330637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.330644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.330994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.331000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.331335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.331341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.331692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.331698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.331891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.331898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.332099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.332105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.332404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.332411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.332586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.332593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 
00:30:27.236 [2024-07-15 20:44:19.332851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.332858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.236 [2024-07-15 20:44:19.333188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.236 [2024-07-15 20:44:19.333194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.236 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.333524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.333531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.333852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.333859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.333904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.333911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.334149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.334156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.334456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.334463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.334667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.334673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.334974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.334980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.335287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.335294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 
00:30:27.237 [2024-07-15 20:44:19.335465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.335472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.335674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.335680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.335893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.335902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.336062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.336070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.336430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.336437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.336637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.336643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.336991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.336997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.337147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.337153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.337492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.337500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 00:30:27.237 [2024-07-15 20:44:19.337666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.237 [2024-07-15 20:44:19.337673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.237 qpair failed and we were unable to recover it. 
00:30:27.237 [2024-07-15 20:44:19.338096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.338103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.338425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.338432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.338765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.338772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.338858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.338866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.339093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.339100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.339425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.339432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.339767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.339774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.339964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.339972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.340265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.340272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.340625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.340632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.340965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.340972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.341296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.341303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.341551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.341558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.341776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.341782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.342113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.342120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.237 [2024-07-15 20:44:19.342368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.237 [2024-07-15 20:44:19.342375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.237 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.342732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.342739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.343103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.343109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.343519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.343526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.343876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.343885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.344221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.344231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.344426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.344434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.344628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.344634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.344815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.344822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.345184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.345191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.345519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.345526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.345848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.345855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.346178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.346184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.346389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.346397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.346630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.346636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.347057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.347064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.347441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.347448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.347660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.347667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.348018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.348024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.348102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.348108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.348465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.348471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.348881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.348889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.349222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.349231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.349568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.349575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.349750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.349757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.349969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.349977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.350335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.350342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.350670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.350676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.351090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.351096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.351269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.351281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.351645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.351651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.351981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.351988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.352407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.352414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.352619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.352625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.352972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.352978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.353184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.353192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.353522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.353528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.353869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.353875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.354217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.354224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.354617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.354625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.354827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.238 [2024-07-15 20:44:19.354834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.238 qpair failed and we were unable to recover it.
00:30:27.238 [2024-07-15 20:44:19.355188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.355195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.355558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.355566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.355889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.355896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.356265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.356274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.356636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.356643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.356976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.356983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.357202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.357209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.357585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.357592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.357933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.357940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.358278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.358285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.358501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.358509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.358721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.358728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.358967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.358976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.359317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.359325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.359649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.359657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.359954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.359961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.360208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.360215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.360417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.360424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.360777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.360784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.361021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.361028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.361364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.361371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.361678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.361686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.362023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.362030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.362223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.362233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.362442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.362450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.362630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.362638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.362969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.362976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.363356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.363363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.363725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.363732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.364065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.364072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.364424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.364432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.364813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.364820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.364865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.364872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.365245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.365252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.239 qpair failed and we were unable to recover it.
00:30:27.239 [2024-07-15 20:44:19.365412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.239 [2024-07-15 20:44:19.365419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.365649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.365656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.365994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.366000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.366189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.366195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.366246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.366252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.366501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.366508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.366770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.366776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.367148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.367155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.367462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.367470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.367841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.367849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.368037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.368044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.368400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.368407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.368800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.368807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.369129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.369136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.369484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.369491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.369677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.369685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.369866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.369873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.370291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.370298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.370496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.370505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.370804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.370811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.371182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.371188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.371598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.371606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.371651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.371659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.371981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.371988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.372317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.372325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.372543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.372550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.372907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.372914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.373251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.373259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.373630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.373638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.373963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.373970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.374306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.374313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.374667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.374674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.374883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.374891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.375250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.375257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.375652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.375659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.375984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.375991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.376188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.376196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.376548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.376555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.376745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.376751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.377116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.240 [2024-07-15 20:44:19.377123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.240 qpair failed and we were unable to recover it.
00:30:27.240 [2024-07-15 20:44:19.377458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.377465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.377810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.377817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.378140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.378147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.378347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.378355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.378699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.378706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.379043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.379050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.379454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.379461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.379799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.379805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.380135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.380142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.380348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.380357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.380704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.380711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.380919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.380926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.381240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.381248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.381461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.381469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.381811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.381818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.381998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.382006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.382340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.382347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.382690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.382697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.383060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.383067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.383394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.383401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.383579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.383586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.383973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.383980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.384319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.384332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.384673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.384680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.384890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.384898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.385237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.385244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.385433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.385440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.385845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.385852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.241 [2024-07-15 20:44:19.386172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.241 [2024-07-15 20:44:19.386179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.241 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.386512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.386519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.386844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.386851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.387099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.387107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.387472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.387479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.387807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.387814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.388032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.388040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.388246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.388254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.388446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.388453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.388650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.388657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.389018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.389026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.389155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.242 [2024-07-15 20:44:19.389162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.242 qpair failed and we were unable to recover it.
00:30:27.242 [2024-07-15 20:44:19.389483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.389490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.389822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.389829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.390209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.390216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.390412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.390420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.390734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.390741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.391080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.391087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.391479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.391486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.391839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.391846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.392050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.392057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.392386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.392396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 
00:30:27.242 [2024-07-15 20:44:19.392752] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.392760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.392937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.392944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.393293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.393300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.393647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.393654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.394063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.394070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.394401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.394408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.394582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.394589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.395011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.395018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.395218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.395227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.395670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.395677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 
00:30:27.242 [2024-07-15 20:44:19.395826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.395833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.395984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.395990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.396216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.396223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.396605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.396612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.396937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.396944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.397182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.397189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.397539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.397546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.397890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.242 [2024-07-15 20:44:19.397898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.242 qpair failed and we were unable to recover it. 00:30:27.242 [2024-07-15 20:44:19.398107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.398114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.398386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.398393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 
00:30:27.243 [2024-07-15 20:44:19.398770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.398777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.399132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.399139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.399475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.399483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.399697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.399704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.400050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.400057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.400262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.400270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.400690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.400697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.401025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.401033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.401359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.401366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.401737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.401745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 
00:30:27.243 [2024-07-15 20:44:19.402124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.402132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.402542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.402549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.402633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.402639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.402991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.402998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.403203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.403211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.403471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.403479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.403813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.403820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.404039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.404047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.404415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.404422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.404614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.404623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 
00:30:27.243 [2024-07-15 20:44:19.404997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.405004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.405327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.405336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.405686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.405693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.406028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.406036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.406385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.406392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.406741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.406749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.406807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.406814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.406995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.407002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.407084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.407091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.407419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.407426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 
00:30:27.243 [2024-07-15 20:44:19.407617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.407625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.407855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.407862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.408185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.408194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.408403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.408411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.408631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.408637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.409039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.409046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.409371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.409379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.409743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.243 [2024-07-15 20:44:19.409750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.243 qpair failed and we were unable to recover it. 00:30:27.243 [2024-07-15 20:44:19.410157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.410164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.410542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.410549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 
00:30:27.244 [2024-07-15 20:44:19.410867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.410874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.411115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.411123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.411179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.411186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.411593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.411601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.411929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.411937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.412134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.412142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.412494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.412501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.412688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.412696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.413037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.413046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.413428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.413436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 
00:30:27.244 [2024-07-15 20:44:19.413635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.413643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.414041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.414048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.414102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.414109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.414427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.414434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.414628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.414636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.414949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.414957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.415280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.415287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.415638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.415645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.415968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.415976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.416179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.416188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 
00:30:27.244 [2024-07-15 20:44:19.416434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.416441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.416793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.416801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.416993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.417000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.417351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.417358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.417786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.417793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.418108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.418115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.418452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.418461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.418721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.418728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.418989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.418996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.419203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.419210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 
00:30:27.244 [2024-07-15 20:44:19.419615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.419622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.419936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.419943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.420324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.420331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.420500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.244 [2024-07-15 20:44:19.420506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.244 qpair failed and we were unable to recover it. 00:30:27.244 [2024-07-15 20:44:19.420709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.420716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.421072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.421079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.421289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.421297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.421670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.421676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.422003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.422010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.422467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.422474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 
00:30:27.245 [2024-07-15 20:44:19.422842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.422848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.423176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.423183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.423522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.423529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.423783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.423791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.423969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.423976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.424262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.424269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.424484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.424491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.424901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.424910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.425146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.425154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.425482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.425489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 
00:30:27.245 [2024-07-15 20:44:19.425635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.425643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.426010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.426017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.426216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.426224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.426406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.426413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.426807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.426814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.427139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.427147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.427482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.427489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.427813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.427820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.428022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.428029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.428375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.428384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 
00:30:27.245 [2024-07-15 20:44:19.428724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.428730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.429062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.429068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.429445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.429452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.429642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.429650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.429867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.429874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.430211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.430217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.430584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.430591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.430792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.430798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.431188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.431194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.431251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.431258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 
00:30:27.245 [2024-07-15 20:44:19.431453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.431460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.431801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.431807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.432153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.432160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.432372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.432379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.245 [2024-07-15 20:44:19.432772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.245 [2024-07-15 20:44:19.432779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.245 qpair failed and we were unable to recover it. 00:30:27.246 [2024-07-15 20:44:19.433016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.246 [2024-07-15 20:44:19.433022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.246 qpair failed and we were unable to recover it. 00:30:27.246 [2024-07-15 20:44:19.433356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.246 [2024-07-15 20:44:19.433363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.246 qpair failed and we were unable to recover it. 00:30:27.246 [2024-07-15 20:44:19.433693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.246 [2024-07-15 20:44:19.433700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.246 qpair failed and we were unable to recover it. 00:30:27.246 [2024-07-15 20:44:19.434039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.246 [2024-07-15 20:44:19.434045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.246 qpair failed and we were unable to recover it. 00:30:27.246 [2024-07-15 20:44:19.434373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.246 [2024-07-15 20:44:19.434379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.246 qpair failed and we were unable to recover it. 
00:30:27.246 [2024-07-15 20:44:19.434547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.246 [2024-07-15 20:44:19.434554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.246 qpair failed and we were unable to recover it.
00:30:27.247 [2024-07-15 20:44:19.452280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.247 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:30:27.247 [2024-07-15 20:44:19.452287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.247 qpair failed and we were unable to recover it.
00:30:27.247 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:30:27.247 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:30:27.247 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:27.247 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:27.251 [2024-07-15 20:44:19.491438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.251 [2024-07-15 20:44:19.491445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.251 qpair failed and we were unable to recover it.
[... the record above repeats 7 more times between 20:44:19.491 and 20:44:19.493 while the test script, with xtrace enabled, logs the commands below ...]
00:30:27.251 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:30:27.251 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:30:27.251 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:27.251 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
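The xtrace lines untangled above show the harness installing its cleanup trap and then creating the RAM-backed bdev the target will export. A minimal stand-alone sketch of the same step, assuming an SPDK checkout with scripts/rpc.py and a running nvmf_tgt (nvmftestfini is the harness's cleanup helper from nvmf/common.sh; it is stubbed here for illustration):

  #!/usr/bin/env bash
  set -e
  # Stub for the harness's nvmftestfini: stop the target on exit.
  nvmftestfini() { pkill -f nvmf_tgt || true; }
  trap 'nvmftestfini' SIGINT SIGTERM EXIT
  # Create a 64 MiB malloc bdev with 512-byte blocks named Malloc0
  # (rpc.py takes the total size in MiB and the block size in bytes).
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0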
00:30:27.251 [2024-07-15 20:44:19.493888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.251 [2024-07-15 20:44:19.493895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.251 qpair failed and we were unable to recover it.
[... the record above repeats 39 more times between 20:44:19.493 and 20:44:19.505, identical except for timestamps ...]
00:30:27.252 [2024-07-15 20:44:19.505556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.252 [2024-07-15 20:44:19.505563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.252 qpair failed and we were unable to recover it.
[... the record above repeats 17 more times between 20:44:19.505 and 20:44:19.511 while the test script logs the commands below ...]
00:30:27.252 Malloc0
00:30:27.252 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:27.252 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:30:27.252 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:27.252 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
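The bare "Malloc0" line is the RPC's stdout: bdev_malloc_create returns the name of the bdev it created. The next rpc_cmd enables the TCP transport inside the target. Equivalent direct call, under the same assumptions as the sketch above (flags copied verbatim from the log; -t selects the transport type, -o is an additional transport option taken unchanged from the log):

  # Register the TCP transport with the nvmf target.
  scripts/rpc.py nvmf_create_transport -t tcp -o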
00:30:27.252 [2024-07-15 20:44:19.511167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.252 [2024-07-15 20:44:19.511174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.252 qpair failed and we were unable to recover it.
[... the record above repeats 9 more times between 20:44:19.511 and 20:44:19.513, identical except for timestamps ...]
00:30:27.253 [2024-07-15 20:44:19.513976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.253 [2024-07-15 20:44:19.513982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.253 qpair failed and we were unable to recover it.
00:30:27.253 [2024-07-15 20:44:19.514690] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
[... the connect()/qpair-failure record repeats 8 more times between 20:44:19.514 and 20:44:19.516 ...]
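The *** TCP Transport Init *** notice is nvmf_tgt confirming that nvmf_create_transport took effect; the initiator's connect() retries still fail because no listener exists on port 4420 yet. If the installed rpc.py provides it, the transport state can be inspected with:

  # Dump the target's active transports as JSON.
  scripts/rpc.py nvmf_get_transports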
00:30:27.253 [2024-07-15 20:44:19.516703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.253 [2024-07-15 20:44:19.516709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.253 qpair failed and we were unable to recover it.
[... the record above repeats 19 more times between 20:44:19.517 and 20:44:19.522, identical except for timestamps ...]
00:30:27.253 [2024-07-15 20:44:19.522899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.254 [2024-07-15 20:44:19.522905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.254 qpair failed and we were unable to recover it.
[... the record above repeats 7 more times between 20:44:19.523 and 20:44:19.525 while the test script logs the commands below ...]
00:30:27.254 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:27.254 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:30:27.254 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:27.254 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
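With the transport up, the script creates the NVMe-oF subsystem the initiator is trying to reach. Equivalent direct call (option meanings per SPDK's rpc.py: -a allows any host to connect, -s sets the subsystem serial number):

  # Create subsystem cnode1, allow any host, set its serial number.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001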
00:30:27.254 [2024-07-15 20:44:19.525347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.254 [2024-07-15 20:44:19.525354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.254 qpair failed and we were unable to recover it.
[... the record above repeats 29 more times between 20:44:19.525 and 20:44:19.533, identical except for timestamps ...]
00:30:27.255 [2024-07-15 20:44:19.533889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:27.255 [2024-07-15 20:44:19.533895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420
00:30:27.255 qpair failed and we were unable to recover it.
[... the record above repeats 16 more times between 20:44:19.534 and 20:44:19.538 while the test script logs the commands below ...]
00:30:27.255 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:27.255 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:27.255 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:27.255 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
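Finally the Malloc0 bdev is attached to the subsystem as a namespace, so the initiator will see it as an NVMe namespace once a connection succeeds. Equivalent direct call, under the same assumptions as the sketches above:

  # Expose bdev Malloc0 as a namespace of subsystem cnode1.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0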
00:30:27.255 [2024-07-15 20:44:19.538803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.538809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.539131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.539138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.539186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.539192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.539560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.539567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.539748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.539756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.540056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.540063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.540215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.540221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.540472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.540479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.540819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.540825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.541143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.541150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 
00:30:27.255 [2024-07-15 20:44:19.541310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.541317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.541729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.541736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.542059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.542065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.542519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.542525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.542879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.542887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.542951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.542958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.543273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.543280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.255 [2024-07-15 20:44:19.543455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.255 [2024-07-15 20:44:19.543461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.255 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.543779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.543786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.544117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.544123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 
00:30:27.256 [2024-07-15 20:44:19.544420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.544426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.544766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.544773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.544962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.544970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.545141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.545149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.545537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.545544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.545807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.545814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.546157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.546164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.546591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.546598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.546800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.546807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.547216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.547223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 
00:30:27.256 [2024-07-15 20:44:19.547611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.547618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.547821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.547828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:27.256 [2024-07-15 20:44:19.548030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.548036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.256 [2024-07-15 20:44:19.548397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.548404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:27.256 [2024-07-15 20:44:19.548647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.548654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:27.256 [2024-07-15 20:44:19.548949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.548956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.549284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.549291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.549679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.549686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 
00:30:27.256 [2024-07-15 20:44:19.550018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.550025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.550460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.550467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.550641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.550648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.550973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.550979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.551182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.551190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.551531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.551538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.551710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.551718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.552075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.552082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.552283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.552291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.552544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.552550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 
00:30:27.256 [2024-07-15 20:44:19.552875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.552883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.553213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.553220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.553388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.553395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.553573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.553580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.553810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.553817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.554024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.554031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.554221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.554231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.554564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.256 [2024-07-15 20:44:19.554571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.256 qpair failed and we were unable to recover it. 00:30:27.256 [2024-07-15 20:44:19.554903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:27.257 [2024-07-15 20:44:19.554910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f5af8000b90 with addr=10.0.0.2, port=4420 00:30:27.257 qpair failed and we were unable to recover it. 
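For context: errno 111 is ECONNREFUSED, i.e. the TCP SYNs above are actively refused because the subsystem's listener on 10.0.0.2:4420 has not been added yet. As a hypothetical triage aid (bash; wait_for_listener, the retry count, and the sleep interval are assumptions, not part of the SPDK test scripts), one could poll the port before letting the initiator start its connect loop:

wait_for_listener() {
    local addr=$1 port=$2 tries=${3:-50}
    for ((i = 0; i < tries; i++)); do
        # nc -z only probes whether the port accepts a connection; no payload is sent
        nc -z -w 1 "$addr" "$port" && return 0
        sleep 0.1
    done
    return 1
}
wait_for_listener 10.0.0.2 4420 || echo "listener never came up on 10.0.0.2:4420" >&2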
00:30:27.257 [2024-07-15 20:44:19.554966] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:27.257 [2024-07-15 20:44:19.565511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.257 [2024-07-15 20:44:19.565584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.257 [2024-07-15 20:44:19.565599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.257 [2024-07-15 20:44:19.565605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.257 [2024-07-15 20:44:19.565610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.257 [2024-07-15 20:44:19.565629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.257 qpair failed and we were unable to recover it.
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:30:27.257 20:44:19 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1539109
00:30:27.257 [2024-07-15 20:44:19.575497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.257 [2024-07-15 20:44:19.575554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.257 [2024-07-15 20:44:19.575566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.257 [2024-07-15 20:44:19.575571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.257 [2024-07-15 20:44:19.575575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.257 [2024-07-15 20:44:19.575585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.257 qpair failed and we were unable to recover it.
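For context: sct 1 in the completion status is the command-specific status code type, and for a Fabrics CONNECT command sc 130 (0x82) is "Connect Invalid Parameters". That lines up with the target-side "Unknown controller ID 0x1": the host is re-driving I/O-qpair CONNECTs that name cntlid 0x1, which the target (after the disconnect this test injects) no longer knows. A couple of hypothetical triage one-liners for a capture like this (plain grep/sort/uniq; console.log is a placeholder filename, an assumption):

grep -c 'qpair failed and we were unable to recover it' console.log
grep -o 'sct [0-9]*, sc [0-9]*' console.log | sort | uniq -c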
00:30:27.257 [2024-07-15 20:44:19.585510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.257 [2024-07-15 20:44:19.585620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.257 [2024-07-15 20:44:19.585633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.257 [2024-07-15 20:44:19.585638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.257 [2024-07-15 20:44:19.585642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.257 [2024-07-15 20:44:19.585654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.257 qpair failed and we were unable to recover it. 00:30:27.257 [2024-07-15 20:44:19.595495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.257 [2024-07-15 20:44:19.595582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.257 [2024-07-15 20:44:19.595594] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.257 [2024-07-15 20:44:19.595599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.257 [2024-07-15 20:44:19.595603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.257 [2024-07-15 20:44:19.595614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.257 qpair failed and we were unable to recover it. 00:30:27.520 [2024-07-15 20:44:19.605495] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.605561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.605572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.605577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.605581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.605591] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 
00:30:27.520 [2024-07-15 20:44:19.615523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.615581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.615593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.615598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.615602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.615612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-07-15 20:44:19.625564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.625622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.625633] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.625638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.625642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.625652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-07-15 20:44:19.635539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.635595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.635606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.635611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.635615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.635625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 
00:30:27.520 [2024-07-15 20:44:19.645534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.645593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.645605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.645610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.645614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.645624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-07-15 20:44:19.655571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.655625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.655636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.655643] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.655648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.520 [2024-07-15 20:44:19.655658] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.520 qpair failed and we were unable to recover it. 00:30:27.520 [2024-07-15 20:44:19.665589] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.520 [2024-07-15 20:44:19.665647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.520 [2024-07-15 20:44:19.665659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.520 [2024-07-15 20:44:19.665664] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.520 [2024-07-15 20:44:19.665668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.665678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 
00:30:27.521 [2024-07-15 20:44:19.675655] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.675713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.675724] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.675728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.675733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.675742] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.685602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.685659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.685671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.685676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.685680] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.685690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.695728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.695778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.695790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.695794] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.695798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.695809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 
00:30:27.521 [2024-07-15 20:44:19.705777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.705836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.705847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.705852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.705856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.705866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.715737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.715791] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.715802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.715807] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.715811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.715821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.725804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.725864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.725875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.725879] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.725883] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.725893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 
00:30:27.521 [2024-07-15 20:44:19.735825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.735918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.735930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.735935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.735939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.735949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.745853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.745914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.745935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.745941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.745946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.745960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.755874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.755929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.755941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.755947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.755952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.755963] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 
00:30:27.521 [2024-07-15 20:44:19.765922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.765983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.766002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.766008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.766013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.766027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.775825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.775883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.775901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.775908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.775912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.775926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.785842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.785894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.785907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.785912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.785916] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.785930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 
00:30:27.521 [2024-07-15 20:44:19.796035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.796092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.796104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.796109] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.796113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.521 [2024-07-15 20:44:19.796123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.521 qpair failed and we were unable to recover it. 00:30:27.521 [2024-07-15 20:44:19.806046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.521 [2024-07-15 20:44:19.806109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.521 [2024-07-15 20:44:19.806120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.521 [2024-07-15 20:44:19.806125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.521 [2024-07-15 20:44:19.806129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.806139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.816031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.816086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.816098] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.816103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.816107] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.816117] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 
00:30:27.522 [2024-07-15 20:44:19.825967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.826015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.826027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.826031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.826035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.826045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.836103] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.836199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.836213] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.836218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.836222] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.836236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.846151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.846210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.846222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.846227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.846236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.846246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 
00:30:27.522 [2024-07-15 20:44:19.856296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.856361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.856372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.856377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.856381] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.856391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.866278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.866356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.866367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.866372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.866376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.866386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.876264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.876318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.876329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.876334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.876338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.876352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 
00:30:27.522 [2024-07-15 20:44:19.886292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.886360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.886370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.886375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.886380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.886390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.522 [2024-07-15 20:44:19.896270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.522 [2024-07-15 20:44:19.896324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.522 [2024-07-15 20:44:19.896334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.522 [2024-07-15 20:44:19.896339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.522 [2024-07-15 20:44:19.896344] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.522 [2024-07-15 20:44:19.896354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.522 qpair failed and we were unable to recover it. 00:30:27.785 [2024-07-15 20:44:19.906288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.785 [2024-07-15 20:44:19.906342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.785 [2024-07-15 20:44:19.906353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.785 [2024-07-15 20:44:19.906358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.785 [2024-07-15 20:44:19.906363] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.785 [2024-07-15 20:44:19.906372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.785 qpair failed and we were unable to recover it. 
00:30:27.785 [2024-07-15 20:44:19.916376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.785 [2024-07-15 20:44:19.916448] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.785 [2024-07-15 20:44:19.916459] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.785 [2024-07-15 20:44:19.916464] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.785 [2024-07-15 20:44:19.916468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.785 [2024-07-15 20:44:19.916478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.785 qpair failed and we were unable to recover it. 00:30:27.785 [2024-07-15 20:44:19.926345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.785 [2024-07-15 20:44:19.926412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.785 [2024-07-15 20:44:19.926423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.785 [2024-07-15 20:44:19.926428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.785 [2024-07-15 20:44:19.926432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.785 [2024-07-15 20:44:19.926442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.785 qpair failed and we were unable to recover it. 00:30:27.785 [2024-07-15 20:44:19.936375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.785 [2024-07-15 20:44:19.936427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.785 [2024-07-15 20:44:19.936439] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.785 [2024-07-15 20:44:19.936444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.785 [2024-07-15 20:44:19.936448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:27.785 [2024-07-15 20:44:19.936458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:27.785 qpair failed and we were unable to recover it. 
00:30:27.785 [2024-07-15 20:44:19.946394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.946458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.946470] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.946474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.946479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.946489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:19.956390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.956446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.956457] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.956462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.956466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.956476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:19.966480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.966540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.966551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.966556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.966563] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.966573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:19.976490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.976547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.976558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.976563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.976567] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.976577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:19.986486] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.986549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.986560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.986565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.986569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.986579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:19.996545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.785 [2024-07-15 20:44:19.996600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.785 [2024-07-15 20:44:19.996611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.785 [2024-07-15 20:44:19.996616] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.785 [2024-07-15 20:44:19.996620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.785 [2024-07-15 20:44:19.996629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.785 qpair failed and we were unable to recover it.
00:30:27.785 [2024-07-15 20:44:20.006582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.006644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.006657] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.006662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.006666] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.006677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.016529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.016591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.016605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.016610] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.016615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.016627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.026634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.026722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.026733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.026739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.026743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.026754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.036675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.036729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.036740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.036745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.036750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.036760] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.046599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.046659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.046670] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.046675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.046679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.046690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.056728] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.056785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.056796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.056804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.056808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.056818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.066852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.066975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.067008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.067060] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.067084] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.067115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.076769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.076827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.076839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.076844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.076848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.076859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.086796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.086857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.086868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.086873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.086878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.086887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.096826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.096917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.096928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.096933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.096937] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.096947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.106856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.106913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.106925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.106930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.106934] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.106945] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.116940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.117004] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.117015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.117020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.117025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.117035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.126911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.126971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.126983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.126988] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.126992] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.127005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.136940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.136994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.786 [2024-07-15 20:44:20.137007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.786 [2024-07-15 20:44:20.137011] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.786 [2024-07-15 20:44:20.137016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.786 [2024-07-15 20:44:20.137027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.786 qpair failed and we were unable to recover it.
00:30:27.786 [2024-07-15 20:44:20.146957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.786 [2024-07-15 20:44:20.147008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.787 [2024-07-15 20:44:20.147022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.787 [2024-07-15 20:44:20.147027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.787 [2024-07-15 20:44:20.147031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.787 [2024-07-15 20:44:20.147042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.787 qpair failed and we were unable to recover it.
00:30:27.787 [2024-07-15 20:44:20.157004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:27.787 [2024-07-15 20:44:20.157056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:27.787 [2024-07-15 20:44:20.157067] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:27.787 [2024-07-15 20:44:20.157072] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:27.787 [2024-07-15 20:44:20.157076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:27.787 [2024-07-15 20:44:20.157086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:27.787 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.167016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.167072] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.167084] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.167089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.167093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.167103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.177040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.177097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.177108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.177113] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.177117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.177127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.187003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.187059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.187070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.187075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.187079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.187089] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.197031] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.197093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.197105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.197110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.197114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.197124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.207135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.207205] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.207216] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.207221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.207225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.207239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.217178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.217232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.217243] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.217248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.217252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.217262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.227185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.227244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.227256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.227261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.227265] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.227278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.237233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.237288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.237302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.237307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.237311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.237322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.247249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.049 [2024-07-15 20:44:20.247310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.049 [2024-07-15 20:44:20.247321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.049 [2024-07-15 20:44:20.247326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.049 [2024-07-15 20:44:20.247330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.049 [2024-07-15 20:44:20.247340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.049 qpair failed and we were unable to recover it.
00:30:28.049 [2024-07-15 20:44:20.257284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.257339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.257350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.257355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.257359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.257369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.267326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.267382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.267393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.267398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.267402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.267412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.277384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.277440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.277451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.277455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.277460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.277472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.287344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.287402] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.287413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.287418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.287422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.287432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.297383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.297435] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.297446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.297451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.297455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.297465] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.307294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.307346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.307358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.307362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.307366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.307376] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.317466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.317527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.317538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.317542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.317547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.317556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.327464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.327571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.327585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.327590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.327594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.327604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.337507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.337559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.337570] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.337574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.337579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.337588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.347568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.347692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.347703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.347707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.347712] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.347722] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.357582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.357645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.357656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.357661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.357665] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.357675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.367581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.367645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.367656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.367661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.367671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.367680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.377611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.377667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.377677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.377682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.377686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.377696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.387634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.387691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.387702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.387707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.387711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.050 [2024-07-15 20:44:20.387721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.050 qpair failed and we were unable to recover it.
00:30:28.050 [2024-07-15 20:44:20.397678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.050 [2024-07-15 20:44:20.397736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.050 [2024-07-15 20:44:20.397746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.050 [2024-07-15 20:44:20.397751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.050 [2024-07-15 20:44:20.397755] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.051 [2024-07-15 20:44:20.397765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.051 qpair failed and we were unable to recover it.
00:30:28.051 [2024-07-15 20:44:20.407614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.051 [2024-07-15 20:44:20.407762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.051 [2024-07-15 20:44:20.407774] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.051 [2024-07-15 20:44:20.407779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.051 [2024-07-15 20:44:20.407783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.051 [2024-07-15 20:44:20.407793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.051 qpair failed and we were unable to recover it.
00:30:28.051 [2024-07-15 20:44:20.417731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.051 [2024-07-15 20:44:20.417789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.051 [2024-07-15 20:44:20.417800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.051 [2024-07-15 20:44:20.417805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.051 [2024-07-15 20:44:20.417809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.051 [2024-07-15 20:44:20.417819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.051 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.427654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.427713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.427725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.427731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.427736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.427747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.437794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.437848] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.437860] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.437867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.437872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.437883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.447723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.447789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.447800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.447805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.447809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.447818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.457859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.457910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.457920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.457928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.457932] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.457942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.467862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.467917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.467935] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.467941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.467946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.467960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.477912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.477973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.477991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.477996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.478001] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.478015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.487974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.312 [2024-07-15 20:44:20.488059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.312 [2024-07-15 20:44:20.488072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.312 [2024-07-15 20:44:20.488077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.312 [2024-07-15 20:44:20.488081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.312 [2024-07-15 20:44:20.488092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.312 qpair failed and we were unable to recover it.
00:30:28.312 [2024-07-15 20:44:20.497989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.498047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.498059] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.498064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.498068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.498079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.507974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.508029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.508040] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.508045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.508050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.508060] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.518029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.518085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.518096] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.518101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.518105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.518115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.528037] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.528099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.528110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.528116] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.528120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.528130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.538113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.538192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.538203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.538207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.538211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.538221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.548070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.548121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.548131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.548139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.548143] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.548153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.558174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.558237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.558249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.558253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.558257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.558267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.568153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.568212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.568223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.568228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.568235] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.568245] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.578181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.578235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.578246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.578251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.578255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.578265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.588095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.588149] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.588160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.588165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.588169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.588179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.598253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.598309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.598320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.598324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.598328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.598338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.608273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.608335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.608346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.608351] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.608355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.608365] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.618223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.618280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.618291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.618295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.618300] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.618310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.628321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.628411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.628422] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.628427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.628431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.628441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.638372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.638425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.638438] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.638443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.638447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.638457] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.648375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.648432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.648443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.648448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.648452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.648462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.658436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.658491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.658502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.658507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.658511] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.658521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.668323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.668386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.668397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.668401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.668406] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.668416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.678544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.678598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.678609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.678614] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.678618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.678631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.313 [2024-07-15 20:44:20.688520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.313 [2024-07-15 20:44:20.688620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.313 [2024-07-15 20:44:20.688632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.313 [2024-07-15 20:44:20.688637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.313 [2024-07-15 20:44:20.688641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.313 [2024-07-15 20:44:20.688651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.313 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.698534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.698586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.698598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.698603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.698607] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.698617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.708527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.708593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.708604] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.708609] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.708614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.708623] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.718596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.718649] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.718660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.718665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.718670] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.718679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.728605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.728660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.728674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.728679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.728683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.728693] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.738604] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.738671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.738682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.738687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.738691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.738701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.748647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.748698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.748709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.748714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.748719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.748729] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.758585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.758637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.574 [2024-07-15 20:44:20.758649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.574 [2024-07-15 20:44:20.758654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.574 [2024-07-15 20:44:20.758658] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.574 [2024-07-15 20:44:20.758669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.574 qpair failed and we were unable to recover it.
00:30:28.574 [2024-07-15 20:44:20.768741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.574 [2024-07-15 20:44:20.768802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.768813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.768818] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.768826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.768836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.778723] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.778777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.778788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.778793] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.778798] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.778808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.788782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.788834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.788845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.788850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.788855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.788865] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.798803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.798861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.798872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.798877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.798881] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.798891] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.808738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.808837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.808849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.808854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.808858] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.808869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.818762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.818823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.818835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.818840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.818845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.818855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.828744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.828795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.828806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.828812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.828816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.828826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.838914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.839006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.839017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.839022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.839026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.839036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.848950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.849018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.849029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.849034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.849038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.849049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.858980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.859033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.859044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.859052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.859056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.859067] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.868996] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.869050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.869061] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.869066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.869070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.869080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.879036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.879088] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.879099] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.879103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.879108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.879118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.575 [2024-07-15 20:44:20.889064] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.575 [2024-07-15 20:44:20.889130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.575 [2024-07-15 20:44:20.889140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.575 [2024-07-15 20:44:20.889146] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.575 [2024-07-15 20:44:20.889150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.575 [2024-07-15 20:44:20.889160] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.575 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.899076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.899128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.899139] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.899144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.899149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.899159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.909109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.909162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.909174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.909179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.909183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.909193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.919134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.919189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.919200] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.919205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.919210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.919220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.929172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.929251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.929262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.929268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.929273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.929283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.939191] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.939249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.939260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.939266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.939270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.939280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.576 [2024-07-15 20:44:20.949091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.576 [2024-07-15 20:44:20.949161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.576 [2024-07-15 20:44:20.949172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.576 [2024-07-15 20:44:20.949180] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.576 [2024-07-15 20:44:20.949184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.576 [2024-07-15 20:44:20.949195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.576 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:20.959243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:20.959301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:20.959313] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:20.959318] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:20.959323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:20.959333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:20.969320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:20.969384] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:20.969395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:20.969400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:20.969404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:20.969415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:20.979303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:20.979406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:20.979417] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:20.979423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:20.979427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:20.979439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:20.989371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:20.989454] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:20.989465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:20.989470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:20.989475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:20.989485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:20.999256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:20.999317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:20.999328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:20.999333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:20.999337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:20.999347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.009412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.009471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.009482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.009487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.009492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.009502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.019450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.019501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.019512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.019517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.019522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.019532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.029437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.029497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.029508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.029513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.029517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.029528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.039492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.039573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.039587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.039592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.039596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.039606] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.049550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.049608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.049619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.049624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.049628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.049638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.059550] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.059609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.837 [2024-07-15 20:44:21.059620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.837 [2024-07-15 20:44:21.059625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.837 [2024-07-15 20:44:21.059629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.837 [2024-07-15 20:44:21.059639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.837 qpair failed and we were unable to recover it.
00:30:28.837 [2024-07-15 20:44:21.069545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.837 [2024-07-15 20:44:21.069620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.069630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.069635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.069640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.069650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.079676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.079737] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.079748] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.079753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.079757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.079770] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.089623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.089681] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.089692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.089698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.089702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.089712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.099681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.099757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.099768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.099773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.099778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.099788] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.109578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.109634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.109645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.109650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.109655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.109666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.119707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.119795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.119806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.119812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.119816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.119826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.129734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.129794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.129811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.129816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.129820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.129831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.139743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.139799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.139810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.139815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.139820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.139829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.149799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.149850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.149861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.149866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.149870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.149881] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.159823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.159881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.159892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.159897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.159901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.159911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.169734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.169798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.169810] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.169815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.838 [2024-07-15 20:44:21.169822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.838 [2024-07-15 20:44:21.169832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.838 qpair failed and we were unable to recover it.
00:30:28.838 [2024-07-15 20:44:21.179873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.838 [2024-07-15 20:44:21.179927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.838 [2024-07-15 20:44:21.179938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.838 [2024-07-15 20:44:21.179944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.839 [2024-07-15 20:44:21.179948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.839 [2024-07-15 20:44:21.179958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.839 qpair failed and we were unable to recover it.
00:30:28.839 [2024-07-15 20:44:21.189873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.839 [2024-07-15 20:44:21.189940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.839 [2024-07-15 20:44:21.189951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.839 [2024-07-15 20:44:21.189956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.839 [2024-07-15 20:44:21.189960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.839 [2024-07-15 20:44:21.189971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.839 qpair failed and we were unable to recover it.
00:30:28.839 [2024-07-15 20:44:21.199924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.839 [2024-07-15 20:44:21.199980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.839 [2024-07-15 20:44:21.199991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.839 [2024-07-15 20:44:21.199996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.839 [2024-07-15 20:44:21.200000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.839 [2024-07-15 20:44:21.200010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.839 qpair failed and we were unable to recover it.
00:30:28.839 [2024-07-15 20:44:21.209961] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.839 [2024-07-15 20:44:21.210015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.839 [2024-07-15 20:44:21.210027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.839 [2024-07-15 20:44:21.210032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.839 [2024-07-15 20:44:21.210036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:28.839 [2024-07-15 20:44:21.210047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:28.839 qpair failed and we were unable to recover it.
00:30:29.100 [2024-07-15 20:44:21.219972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.100 [2024-07-15 20:44:21.220042] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.100 [2024-07-15 20:44:21.220054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.100 [2024-07-15 20:44:21.220059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.100 [2024-07-15 20:44:21.220063] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.100 [2024-07-15 20:44:21.220074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-07-15 20:44:21.230002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.100 [2024-07-15 20:44:21.230061] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.100 [2024-07-15 20:44:21.230072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.100 [2024-07-15 20:44:21.230077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.100 [2024-07-15 20:44:21.230082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.100 [2024-07-15 20:44:21.230092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-07-15 20:44:21.240043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.100 [2024-07-15 20:44:21.240099] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.100 [2024-07-15 20:44:21.240110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.100 [2024-07-15 20:44:21.240115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.100 [2024-07-15 20:44:21.240120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.100 [2024-07-15 20:44:21.240130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-07-15 20:44:21.249927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.100 [2024-07-15 20:44:21.249985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.100 [2024-07-15 20:44:21.249996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.100 [2024-07-15 20:44:21.250001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.100 [2024-07-15 20:44:21.250006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.100 [2024-07-15 20:44:21.250016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.100 qpair failed and we were unable to recover it.
00:30:29.100 [2024-07-15 20:44:21.260095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.100 [2024-07-15 20:44:21.260146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.100 [2024-07-15 20:44:21.260157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.100 [2024-07-15 20:44:21.260162] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.100 [2024-07-15 20:44:21.260169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.260179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.270129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.270180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.270191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.270197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.270201] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.270211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.280160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.280216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.280227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.280235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.280239] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.280249] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.290188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.290290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.290302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.290307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.290311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.290322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.300166] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.300227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.300242] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.300247] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.300251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.300261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.310097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.310152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.310163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.310168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.310173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.310183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.320268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.320325] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.320336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.320341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.320346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.320356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.330320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.330373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.330384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.330389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.330393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.330403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.340303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.340354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.340366] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.340371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.340375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.340385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.350354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.350407] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.101 [2024-07-15 20:44:21.350418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.101 [2024-07-15 20:44:21.350426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.101 [2024-07-15 20:44:21.350430] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.101 [2024-07-15 20:44:21.350441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.101 qpair failed and we were unable to recover it.
00:30:29.101 [2024-07-15 20:44:21.360374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.101 [2024-07-15 20:44:21.360428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.360440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.360445] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.360449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.360459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.370366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.370426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.370437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.370443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.370448] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.370458] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.380509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.380561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.380572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.380577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.380582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.380592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.390449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.390501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.390512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.390517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.390522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.390532] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.400490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.400547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.400559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.400564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.400569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.400578] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.410530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.410595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.410606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.410611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.410615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.410625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.420536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.420595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.420606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.420611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.420616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.420626] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.430564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.430620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.430631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.430636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.430641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.430651] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.440587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.440685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.440699] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.440704] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.440709] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.102 [2024-07-15 20:44:21.440719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.102 qpair failed and we were unable to recover it.
00:30:29.102 [2024-07-15 20:44:21.450626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.102 [2024-07-15 20:44:21.450683] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.102 [2024-07-15 20:44:21.450694] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.102 [2024-07-15 20:44:21.450699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.102 [2024-07-15 20:44:21.450704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.103 [2024-07-15 20:44:21.450713] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-07-15 20:44:21.460627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.103 [2024-07-15 20:44:21.460680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.103 [2024-07-15 20:44:21.460692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.103 [2024-07-15 20:44:21.460697] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.103 [2024-07-15 20:44:21.460701] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.103 [2024-07-15 20:44:21.460711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.103 [2024-07-15 20:44:21.470662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.103 [2024-07-15 20:44:21.470722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.103 [2024-07-15 20:44:21.470734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.103 [2024-07-15 20:44:21.470739] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.103 [2024-07-15 20:44:21.470743] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.103 [2024-07-15 20:44:21.470754] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.103 qpair failed and we were unable to recover it.
00:30:29.364 [2024-07-15 20:44:21.480725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.364 [2024-07-15 20:44:21.480784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.364 [2024-07-15 20:44:21.480795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.364 [2024-07-15 20:44:21.480801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.364 [2024-07-15 20:44:21.480805] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.364 [2024-07-15 20:44:21.480818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.364 qpair failed and we were unable to recover it.
00:30:29.364 [2024-07-15 20:44:21.490727] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.364 [2024-07-15 20:44:21.490785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.364 [2024-07-15 20:44:21.490796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.364 [2024-07-15 20:44:21.490802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.364 [2024-07-15 20:44:21.490806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.364 [2024-07-15 20:44:21.490816] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.364 qpair failed and we were unable to recover it.
00:30:29.364 [2024-07-15 20:44:21.500736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.364 [2024-07-15 20:44:21.500790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.500801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.500806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.500811] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.500821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.510792] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.510846] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.510857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.510863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.510867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.510877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.520780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.520847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.520858] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.520864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.520869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.520879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.530722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.530785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.530799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.530804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.530809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.530819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.540837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.540893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.540905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.540910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.540915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.540925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.550884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.550934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.550945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.550950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.550955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.550965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.560934] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.560987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.560998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.561004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.561008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.561019] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.570940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.571011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.571022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.571027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.571032] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.571045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.580979] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.581035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.581046] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.581051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.581056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.581066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.590993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.591050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.591062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.591067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.591072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.591082] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.601052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.601107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.601119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.601125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.601130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.601140] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.611084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.611146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.611157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.611163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.611168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.611179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.621098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.621166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.365 [2024-07-15 20:44:21.621178] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.365 [2024-07-15 20:44:21.621184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.365 [2024-07-15 20:44:21.621189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.365 [2024-07-15 20:44:21.621200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.365 qpair failed and we were unable to recover it.
00:30:29.365 [2024-07-15 20:44:21.631121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.365 [2024-07-15 20:44:21.631178] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.631190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.631195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.631200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.631211] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.641039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.641094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.641105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.641111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.641116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.641127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.651176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.651238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.651250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.651256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.651262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.651273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.661198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.661254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.661266] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.661271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.661279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.661290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.671239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.671314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.671326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.671332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.671337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.671348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.681235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.681308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.681321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.681326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.681332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.681343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.691296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.691352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.691363] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.691368] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.691373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.691385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.701329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.701423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.701435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.701440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.701445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.701456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.711344] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.711400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.711412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.711417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.711422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.711433] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.721381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.721436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.721448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.721453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.721459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.721469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.731379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.731440] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.731452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.731457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.731462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.731473] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.366 [2024-07-15 20:44:21.741454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.366 [2024-07-15 20:44:21.741525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.366 [2024-07-15 20:44:21.741537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.366 [2024-07-15 20:44:21.741542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.366 [2024-07-15 20:44:21.741547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.366 [2024-07-15 20:44:21.741558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.366 qpair failed and we were unable to recover it.
00:30:29.627 [2024-07-15 20:44:21.751437] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.627 [2024-07-15 20:44:21.751491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.627 [2024-07-15 20:44:21.751502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.627 [2024-07-15 20:44:21.751512] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.627 [2024-07-15 20:44:21.751517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.627 [2024-07-15 20:44:21.751527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.627 qpair failed and we were unable to recover it.
00:30:29.627 [2024-07-15 20:44:21.761492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.627 [2024-07-15 20:44:21.761547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.627 [2024-07-15 20:44:21.761559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.627 [2024-07-15 20:44:21.761564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.761570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.761580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.771484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.771547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.771558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.771564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.771569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.771579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.781541] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.781613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.781625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.781630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.781635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.781646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.791494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.791546] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.791558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.791564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.791569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.791579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.801646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.801746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.801760] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.801766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.801771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.801783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.811630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.811690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.811702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.811708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.811713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.811724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.821623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.821685] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.821696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.821702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.821707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.821718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.831552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.831603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.831614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.831620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.831625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.831636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.841706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.841776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.841791] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.841796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.841801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.841812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.851729] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.851785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.851797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.851802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.851808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.851819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.861829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.861893] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.861905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.861910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.861915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.861926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.871832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.871885] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.871897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.871902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.871907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.871918] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.881724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.881778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.881789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.881795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.881800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.881810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.891873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.891934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.891945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.891951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.891955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.891966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.901856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.901916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.901927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.901933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.901938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.901948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.911772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.911825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.911837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.911843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.628 [2024-07-15 20:44:21.911848] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.628 [2024-07-15 20:44:21.911860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.628 qpair failed and we were unable to recover it.
00:30:29.628 [2024-07-15 20:44:21.921898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.628 [2024-07-15 20:44:21.921949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.628 [2024-07-15 20:44:21.921961] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.628 [2024-07-15 20:44:21.921967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.921972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.921983] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.931939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.932017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.932045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.932053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.932058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.932073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.941943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.942045] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.942064] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.942071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.942076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.942091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.951990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.952050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.952063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.952068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.952073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.952084] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.961908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.961960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.961972] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.961977] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.961981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.961992] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.971950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.972005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.972017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.972022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.972026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.972040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.981972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.982021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.982033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.982037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.982042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.982052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:21.992098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:21.992193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:21.992205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:21.992211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:21.992216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:21.992226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.629 [2024-07-15 20:44:22.002130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.629 [2024-07-15 20:44:22.002188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.629 [2024-07-15 20:44:22.002199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.629 [2024-07-15 20:44:22.002205] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.629 [2024-07-15 20:44:22.002209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.629 [2024-07-15 20:44:22.002220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.629 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.012169] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.012228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.012244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.012249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.012254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.012264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.022149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.022195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.022208] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.022214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.022218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.022228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.032101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.032153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.032164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.032169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.032174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.032184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.042263] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.042320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.042331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.042337] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.042341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.042352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.052292] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.052355] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.052367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.052372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.052377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.052387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.062260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.062316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.062327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.062332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.062339] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.062350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.072315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.072379] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.072390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.072396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.072400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.072411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.082383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.082452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.082463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.082468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.082472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.082483] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.092398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.092495] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.092508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.092513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.092517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.092527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.102350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.891 [2024-07-15 20:44:22.102400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.891 [2024-07-15 20:44:22.102412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.891 [2024-07-15 20:44:22.102417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.891 [2024-07-15 20:44:22.102421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.891 [2024-07-15 20:44:22.102431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.891 qpair failed and we were unable to recover it.
00:30:29.891 [2024-07-15 20:44:22.112485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.112555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.112566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.112572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.112576] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.112586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.122491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.122549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.122560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.122565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.122570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.122581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.132510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.132567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.132579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.132584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.132589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.132599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.142515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.142568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.142579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.142584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.142589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.142599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.152583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.152651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.152662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.152671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.152675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.152685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.162478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.162533] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.162544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.162549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.162554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.162564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.172649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.172712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.172723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.172728] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.172733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.172743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.182624] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.182684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.182695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.182700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.182705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.182715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.192684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.192744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.192756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.192761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.192766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.192779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.202704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.202758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.202770] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.202775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.202780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.202790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.212735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.212796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.212807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.212813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.212817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.212827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.222694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.892 [2024-07-15 20:44:22.222746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.892 [2024-07-15 20:44:22.222757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.892 [2024-07-15 20:44:22.222762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.892 [2024-07-15 20:44:22.222767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:29.892 [2024-07-15 20:44:22.222777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:29.892 qpair failed and we were unable to recover it.
00:30:29.892 [2024-07-15 20:44:22.232799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.892 [2024-07-15 20:44:22.232852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.892 [2024-07-15 20:44:22.232863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.892 [2024-07-15 20:44:22.232868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.892 [2024-07-15 20:44:22.232873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:29.892 [2024-07-15 20:44:22.232883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:29.892 qpair failed and we were unable to recover it. 00:30:29.892 [2024-07-15 20:44:22.242829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.892 [2024-07-15 20:44:22.242886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.893 [2024-07-15 20:44:22.242898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.893 [2024-07-15 20:44:22.242907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.893 [2024-07-15 20:44:22.242911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:29.893 [2024-07-15 20:44:22.242924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:29.893 qpair failed and we were unable to recover it. 00:30:29.893 [2024-07-15 20:44:22.252889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.893 [2024-07-15 20:44:22.252949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.893 [2024-07-15 20:44:22.252962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.893 [2024-07-15 20:44:22.252967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.893 [2024-07-15 20:44:22.252971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:29.893 [2024-07-15 20:44:22.252981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:29.893 qpair failed and we were unable to recover it. 
00:30:29.893 [2024-07-15 20:44:22.262849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.893 [2024-07-15 20:44:22.262900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.893 [2024-07-15 20:44:22.262912] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.893 [2024-07-15 20:44:22.262917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.893 [2024-07-15 20:44:22.262922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:29.893 [2024-07-15 20:44:22.262932] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:29.893 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.272900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.272956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.272975] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.272982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.272987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.273001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.282804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.282863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.282875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.282881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.282885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.282897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-07-15 20:44:22.292944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.293032] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.293044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.293049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.293055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.293065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.302828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.302884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.302897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.302902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.302906] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.302924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.312881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.312934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.312946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.312952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.312956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.312966] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-07-15 20:44:22.323039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.323098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.323110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.323115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.323119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.323130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.333044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.333104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.333118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.333124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.333128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.333139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.343080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.343133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.343144] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.343149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.343153] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.343164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-07-15 20:44:22.353119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.353171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.353182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.353188] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.353192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.353202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.363026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.363083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.363094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.363099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.363104] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.363114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.373132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.373188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.373199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.373204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.373209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.373222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-07-15 20:44:22.383168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.383223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.383237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.383242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.383247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.383257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.393237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.393289] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.393300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.393305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.393309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.393319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.403288] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.403346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.403357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.403362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.403367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.403377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 
00:30:30.155 [2024-07-15 20:44:22.413267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.413322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.155 [2024-07-15 20:44:22.413333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.155 [2024-07-15 20:44:22.413338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.155 [2024-07-15 20:44:22.413342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.155 [2024-07-15 20:44:22.413352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.155 qpair failed and we were unable to recover it. 00:30:30.155 [2024-07-15 20:44:22.423289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.155 [2024-07-15 20:44:22.423336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.423350] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.423355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.423359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.423370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.433350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.433408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.433419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.433424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.433429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.433439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 
00:30:30.156 [2024-07-15 20:44:22.443261] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.443314] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.443326] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.443332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.443336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.443347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.453341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.453397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.453408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.453413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.453417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.453427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.463420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.463464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.463476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.463481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.463488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.463499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 
00:30:30.156 [2024-07-15 20:44:22.473485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.473535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.473547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.473553] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.473558] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.473568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.483489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.483543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.483555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.483560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.483565] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.483575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.493465] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.493525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.493536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.493542] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.493546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.493556] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 
00:30:30.156 [2024-07-15 20:44:22.503499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.503551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.503562] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.503567] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.503571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.503581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.513463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.513522] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.513534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.513539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.513544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.513554] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 00:30:30.156 [2024-07-15 20:44:22.523598] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.156 [2024-07-15 20:44:22.523653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.156 [2024-07-15 20:44:22.523664] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.156 [2024-07-15 20:44:22.523670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.156 [2024-07-15 20:44:22.523674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.156 [2024-07-15 20:44:22.523684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.156 qpair failed and we were unable to recover it. 
00:30:30.419 [2024-07-15 20:44:22.533595] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.533700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.533713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.533719] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.533725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.533736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.543471] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.543521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.543532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.543538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.543542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.543553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.553671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.553724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.553735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.553747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.553752] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.553762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 
00:30:30.419 [2024-07-15 20:44:22.563669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.563757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.563768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.563774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.563779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.563789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.573546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.573601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.573613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.573618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.573623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.573633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.583705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.583767] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.583778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.583783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.583788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.583798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 
00:30:30.419 [2024-07-15 20:44:22.593772] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.593823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.593835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.593840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.593845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.593855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.603785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.603841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.603853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.419 [2024-07-15 20:44:22.603858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.419 [2024-07-15 20:44:22.603862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.419 [2024-07-15 20:44:22.603873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.419 qpair failed and we were unable to recover it. 00:30:30.419 [2024-07-15 20:44:22.613646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.419 [2024-07-15 20:44:22.613702] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.419 [2024-07-15 20:44:22.613713] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.613718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.613723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.613733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 
00:30:30.420 [2024-07-15 20:44:22.623782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.623826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.623837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.623843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.623847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.623857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.633874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.633927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.633938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.633943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.633947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.633958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.643911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.643971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.643989] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.643999] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.644004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.644018] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 
00:30:30.420 [2024-07-15 20:44:22.653872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.653929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.653947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.653954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.653958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.653972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.663896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.663964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.663976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.663981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.663986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.663997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.673986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.674038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.674049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.674054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.674059] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.674069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 
00:30:30.420 [2024-07-15 20:44:22.684020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.684147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.684159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.684164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.684169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.684179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.693861] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.693913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.693926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.693931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.693936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.693947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.704017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.704066] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.704078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.704083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.704087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.704098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 
00:30:30.420 [2024-07-15 20:44:22.714045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.714092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.714103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.714108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.714113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.714123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.724105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.724167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.724179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.724184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.724188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.724198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 00:30:30.420 [2024-07-15 20:44:22.734071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.420 [2024-07-15 20:44:22.734148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.420 [2024-07-15 20:44:22.734162] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.420 [2024-07-15 20:44:22.734168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.420 [2024-07-15 20:44:22.734172] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:30.420 [2024-07-15 20:44:22.734182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.420 qpair failed and we were unable to recover it. 
00:30:30.420 [2024-07-15 20:44:22.744153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.420 [2024-07-15 20:44:22.744249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.420 [2024-07-15 20:44:22.744260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.420 [2024-07-15 20:44:22.744266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.420 [2024-07-15 20:44:22.744270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.420 [2024-07-15 20:44:22.744281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.420 qpair failed and we were unable to recover it.
00:30:30.420 [2024-07-15 20:44:22.754180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.420 [2024-07-15 20:44:22.754248] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.420 [2024-07-15 20:44:22.754259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.420 [2024-07-15 20:44:22.754264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.420 [2024-07-15 20:44:22.754269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.420 [2024-07-15 20:44:22.754279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.421 qpair failed and we were unable to recover it.
00:30:30.421 [2024-07-15 20:44:22.764248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.421 [2024-07-15 20:44:22.764334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.421 [2024-07-15 20:44:22.764345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.421 [2024-07-15 20:44:22.764352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.421 [2024-07-15 20:44:22.764356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.421 [2024-07-15 20:44:22.764366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.421 qpair failed and we were unable to recover it.
00:30:30.421 [2024-07-15 20:44:22.774219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.421 [2024-07-15 20:44:22.774298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.421 [2024-07-15 20:44:22.774310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.421 [2024-07-15 20:44:22.774315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.421 [2024-07-15 20:44:22.774319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.421 [2024-07-15 20:44:22.774333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.421 qpair failed and we were unable to recover it.
00:30:30.421 [2024-07-15 20:44:22.784239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.421 [2024-07-15 20:44:22.784288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.421 [2024-07-15 20:44:22.784300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.421 [2024-07-15 20:44:22.784305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.421 [2024-07-15 20:44:22.784309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.421 [2024-07-15 20:44:22.784319] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.421 qpair failed and we were unable to recover it.
00:30:30.421 [2024-07-15 20:44:22.794281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.421 [2024-07-15 20:44:22.794326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.421 [2024-07-15 20:44:22.794339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.421 [2024-07-15 20:44:22.794344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.421 [2024-07-15 20:44:22.794349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.421 [2024-07-15 20:44:22.794359] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.421 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.804333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.804422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.804434] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.804440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.804444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.804455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.814325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.814377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.814388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.814393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.814398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.814408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.824224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.824281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.824296] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.824302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.824306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.824317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.834373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.834423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.834435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.834440] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.834444] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.834455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.844446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.844500] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.844512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.844517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.844521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.844531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.854421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.854516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.854528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.854533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.854538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.854549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.864432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.864477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.864488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.864493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.864500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.864510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.683 [2024-07-15 20:44:22.874489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.683 [2024-07-15 20:44:22.874564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.683 [2024-07-15 20:44:22.874575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.683 [2024-07-15 20:44:22.874580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.683 [2024-07-15 20:44:22.874585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.683 [2024-07-15 20:44:22.874595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.683 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.884511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.884568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.884579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.884584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.884588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.884598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.894505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.894558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.894568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.894573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.894578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.894588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.904560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.904616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.904627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.904632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.904637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.904647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.914601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.914667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.914678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.914683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.914688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.914698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.924615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.924671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.924681] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.924686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.924691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.924701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.934652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.934707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.934718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.934723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.934728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.934738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.944671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.944720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.944731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.944736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.944740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.944750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.954711] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.954755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.954766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.954771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.954779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.954789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.964823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.964876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.964887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.964892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.964897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.964907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.974776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.974841] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.974853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.974857] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.974862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.974871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.984776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.984876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.984887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.984892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.984897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.984907] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:22.994835] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:22.994887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:22.994898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:22.994903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:22.994907] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:22.994917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:23.004879] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:23.004935] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:23.004947] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:23.004952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:23.004956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:23.004967] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:23.014865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:23.014920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.684 [2024-07-15 20:44:23.014931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.684 [2024-07-15 20:44:23.014936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.684 [2024-07-15 20:44:23.014941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.684 [2024-07-15 20:44:23.014951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.684 qpair failed and we were unable to recover it.
00:30:30.684 [2024-07-15 20:44:23.024774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.684 [2024-07-15 20:44:23.024825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.685 [2024-07-15 20:44:23.024836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.685 [2024-07-15 20:44:23.024841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.685 [2024-07-15 20:44:23.024846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.685 [2024-07-15 20:44:23.024856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.685 qpair failed and we were unable to recover it.
00:30:30.685 [2024-07-15 20:44:23.034915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.685 [2024-07-15 20:44:23.034966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.685 [2024-07-15 20:44:23.034977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.685 [2024-07-15 20:44:23.034982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.685 [2024-07-15 20:44:23.034986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.685 [2024-07-15 20:44:23.034997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.685 qpair failed and we were unable to recover it.
00:30:30.685 [2024-07-15 20:44:23.044863] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.685 [2024-07-15 20:44:23.044919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.685 [2024-07-15 20:44:23.044930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.685 [2024-07-15 20:44:23.044938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.685 [2024-07-15 20:44:23.044942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.685 [2024-07-15 20:44:23.044953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.685 qpair failed and we were unable to recover it.
00:30:30.685 [2024-07-15 20:44:23.054962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.685 [2024-07-15 20:44:23.055018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.685 [2024-07-15 20:44:23.055029] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.685 [2024-07-15 20:44:23.055035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.685 [2024-07-15 20:44:23.055039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.685 [2024-07-15 20:44:23.055049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.685 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.065002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.065102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.065113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.065118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.065123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.065134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.075036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.075085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.075097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.075102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.075106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.075116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.085165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.085267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.085278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.085283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.085288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.085299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.095076] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.095131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.095142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.095147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.095152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.095162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.104987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.105058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.105070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.105075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.105080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.105090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.115132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.115223] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.115237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.115243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.115247] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.115257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.125207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.125269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.125280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.125285] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.947 [2024-07-15 20:44:23.125289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.947 [2024-07-15 20:44:23.125300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.947 qpair failed and we were unable to recover it.
00:30:30.947 [2024-07-15 20:44:23.135204] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.947 [2024-07-15 20:44:23.135264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.947 [2024-07-15 20:44:23.135278] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.947 [2024-07-15 20:44:23.135283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.135287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.135297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.145113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.145171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.145182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.145187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.145191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.145201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.155254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.155306] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.155318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.155323] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.155327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.155337] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.165199] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.165258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.165269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.165274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.165279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.165289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.175303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.175357] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.175368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.175373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.175378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.175391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.185342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.185394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.185405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.185410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.185414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.185424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.195365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.195411] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.195424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.195429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.195433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.195444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.205314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.205371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.205383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.205388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.205392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.205402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.215421] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.215479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.215490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.215496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.215500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.215510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.225443] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.225494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.225508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.225513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.225517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.225528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.235490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.235543] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.235555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.235560] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.235564] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.235574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.245559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.245615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.245626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.245631] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.245636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.245646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.255518] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.255574] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.255585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.255590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.255594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.255604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.265536] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.265589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.265600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.265606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.265610] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.948 [2024-07-15 20:44:23.265625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.948 qpair failed and we were unable to recover it.
00:30:30.948 [2024-07-15 20:44:23.275554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.948 [2024-07-15 20:44:23.275607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.948 [2024-07-15 20:44:23.275618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.948 [2024-07-15 20:44:23.275623] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.948 [2024-07-15 20:44:23.275628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.949 [2024-07-15 20:44:23.275638] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.949 qpair failed and we were unable to recover it.
00:30:30.949 [2024-07-15 20:44:23.285647] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.949 [2024-07-15 20:44:23.285736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.949 [2024-07-15 20:44:23.285747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.949 [2024-07-15 20:44:23.285753] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.949 [2024-07-15 20:44:23.285759] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.949 [2024-07-15 20:44:23.285769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.949 qpair failed and we were unable to recover it.
00:30:30.949 [2024-07-15 20:44:23.295635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.949 [2024-07-15 20:44:23.295691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.949 [2024-07-15 20:44:23.295703] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.949 [2024-07-15 20:44:23.295708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.949 [2024-07-15 20:44:23.295714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.949 [2024-07-15 20:44:23.295724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.949 qpair failed and we were unable to recover it.
00:30:30.949 [2024-07-15 20:44:23.305666] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.949 [2024-07-15 20:44:23.305715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.949 [2024-07-15 20:44:23.305727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.949 [2024-07-15 20:44:23.305732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.949 [2024-07-15 20:44:23.305736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.949 [2024-07-15 20:44:23.305746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.949 qpair failed and we were unable to recover it.
00:30:30.949 [2024-07-15 20:44:23.315716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.949 [2024-07-15 20:44:23.315768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.949 [2024-07-15 20:44:23.315779] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.949 [2024-07-15 20:44:23.315784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.949 [2024-07-15 20:44:23.315788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:30.949 [2024-07-15 20:44:23.315799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:30.949 qpair failed and we were unable to recover it.
00:30:31.211 [2024-07-15 20:44:23.325754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.211 [2024-07-15 20:44:23.325813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.211 [2024-07-15 20:44:23.325824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.211 [2024-07-15 20:44:23.325830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.211 [2024-07-15 20:44:23.325834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.211 [2024-07-15 20:44:23.325844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.211 qpair failed and we were unable to recover it.
00:30:31.211 [2024-07-15 20:44:23.335760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.211 [2024-07-15 20:44:23.335821] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.211 [2024-07-15 20:44:23.335833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.211 [2024-07-15 20:44:23.335838] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.211 [2024-07-15 20:44:23.335842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.211 [2024-07-15 20:44:23.335853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.345791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.345948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.345960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.345965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.345969] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.345979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.355822] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.355877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.355895] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.355902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.355910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.355924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.365763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.365825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.365837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.365842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.365846] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.365857] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.375976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.376031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.376042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.376048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.376052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.376062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.385901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.385956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.385974] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.385980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.385985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.385999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.395946] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.395999] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.396017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.396023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.396028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.396042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.406028] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.406115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.406134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.406140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.406145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.406158] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.415989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.416049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.416062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.416067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.416072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.416083] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.426012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:31.212 [2024-07-15 20:44:23.426064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:31.212 [2024-07-15 20:44:23.426076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:31.212 [2024-07-15 20:44:23.426081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:31.212 [2024-07-15 20:44:23.426085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90
00:30:31.212 [2024-07-15 20:44:23.426095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:30:31.212 qpair failed and we were unable to recover it.
00:30:31.212 [2024-07-15 20:44:23.436033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.436116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.436128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.436133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.436138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.212 [2024-07-15 20:44:23.436150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.212 qpair failed and we were unable to recover it. 00:30:31.212 [2024-07-15 20:44:23.446109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.446164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.446176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.446184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.446189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.212 [2024-07-15 20:44:23.446200] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.212 qpair failed and we were unable to recover it. 00:30:31.212 [2024-07-15 20:44:23.456090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.456146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.456158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.456163] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.456168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.212 [2024-07-15 20:44:23.456178] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.212 qpair failed and we were unable to recover it. 
00:30:31.212 [2024-07-15 20:44:23.466117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.466165] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.466177] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.466182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.466187] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.212 [2024-07-15 20:44:23.466197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.212 qpair failed and we were unable to recover it. 00:30:31.212 [2024-07-15 20:44:23.476123] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.476170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.476182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.476187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.476191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.212 [2024-07-15 20:44:23.476202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.212 qpair failed and we were unable to recover it. 00:30:31.212 [2024-07-15 20:44:23.486236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.212 [2024-07-15 20:44:23.486290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.212 [2024-07-15 20:44:23.486301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.212 [2024-07-15 20:44:23.486306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.212 [2024-07-15 20:44:23.486311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.486321] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 
00:30:31.213 [2024-07-15 20:44:23.496268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.496322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.496333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.496339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.496343] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.496354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.506244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.506348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.506360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.506366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.506370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.506380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.516257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.516311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.516322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.516328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.516332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.516343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 
00:30:31.213 [2024-07-15 20:44:23.526331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.526388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.526399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.526404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.526408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.526418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.536309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.536370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.536384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.536389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.536394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.536404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.546346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.546393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.546404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.546410] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.546414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.546424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 
00:30:31.213 [2024-07-15 20:44:23.556365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.556421] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.556432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.556437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.556442] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.556453] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.566319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.566394] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.566405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.566411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.566415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.566425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.213 [2024-07-15 20:44:23.576298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.576354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.576365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.576370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.576375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.576388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 
00:30:31.213 [2024-07-15 20:44:23.586449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.213 [2024-07-15 20:44:23.586508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.213 [2024-07-15 20:44:23.586519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.213 [2024-07-15 20:44:23.586524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.213 [2024-07-15 20:44:23.586528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.213 [2024-07-15 20:44:23.586538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.213 qpair failed and we were unable to recover it. 00:30:31.475 [2024-07-15 20:44:23.596475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.475 [2024-07-15 20:44:23.596527] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.475 [2024-07-15 20:44:23.596538] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.475 [2024-07-15 20:44:23.596543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.475 [2024-07-15 20:44:23.596548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.475 [2024-07-15 20:44:23.596558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.475 qpair failed and we were unable to recover it. 00:30:31.475 [2024-07-15 20:44:23.606559] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.475 [2024-07-15 20:44:23.606615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.475 [2024-07-15 20:44:23.606627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.475 [2024-07-15 20:44:23.606632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.606637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.606647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 
00:30:31.476 [2024-07-15 20:44:23.616534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.616589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.616601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.616606] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.616611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.616622] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.626466] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.626528] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.626541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.626547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.626551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.626561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.636463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.636511] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.636522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.636527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.636532] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.636542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 
00:30:31.476 [2024-07-15 20:44:23.646662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.646715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.646726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.646731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.646735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.646746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.656648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.656745] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.656757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.656762] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.656767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.656776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.666646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.666698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.666709] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.666714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.666718] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.666731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 
00:30:31.476 [2024-07-15 20:44:23.676744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.676797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.676808] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.676813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.676818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.676827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.686776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.686864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.686875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.686880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.686885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.686895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.696749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.696856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.696868] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.696874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.696879] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.696889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 
00:30:31.476 [2024-07-15 20:44:23.706777] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.706836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.706847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.706852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.706856] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.706866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.716807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.716862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.716876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.716882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.716886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.716896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.726874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.726931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.726943] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.726948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.726952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.726962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 
00:30:31.476 [2024-07-15 20:44:23.736849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.736909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.476 [2024-07-15 20:44:23.736927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.476 [2024-07-15 20:44:23.736934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.476 [2024-07-15 20:44:23.736938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.476 [2024-07-15 20:44:23.736952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.476 qpair failed and we were unable to recover it. 00:30:31.476 [2024-07-15 20:44:23.746935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.476 [2024-07-15 20:44:23.746990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.747009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.747015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.747020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.747034] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.756794] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.756863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.756875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.756881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.756890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.756901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 
00:30:31.477 [2024-07-15 20:44:23.766976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.767029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.767041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.767046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.767051] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.767061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.776870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.776929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.776940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.776946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.776951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.776961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.786993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.787046] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.787057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.787063] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.787067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.787078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 
00:30:31.477 [2024-07-15 20:44:23.797026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.797076] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.797088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.797094] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.797098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.797108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.807060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.807117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.807129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.807134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.807139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.807149] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.817105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.817161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.817173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.817178] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.817182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.817193] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 
00:30:31.477 [2024-07-15 20:44:23.827078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.827131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.827142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.827147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.827152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.827162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.837133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.837184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.837195] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.837201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.837205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.837215] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 00:30:31.477 [2024-07-15 20:44:23.847251] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.477 [2024-07-15 20:44:23.847352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.477 [2024-07-15 20:44:23.847365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.477 [2024-07-15 20:44:23.847372] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.477 [2024-07-15 20:44:23.847377] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.477 [2024-07-15 20:44:23.847388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.477 qpair failed and we were unable to recover it. 
00:30:31.739 [2024-07-15 20:44:23.857209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.739 [2024-07-15 20:44:23.857297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.739 [2024-07-15 20:44:23.857310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.739 [2024-07-15 20:44:23.857315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.857320] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.857331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.867206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.867257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.867269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.867275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.867279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.867290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.877101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.877151] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.877164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.877170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.877174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.877185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 
00:30:31.740 [2024-07-15 20:44:23.887312] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.887371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.887383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.887388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.887392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.887403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.897258] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.897350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.897362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.897367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.897372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.897382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.907307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.907386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.907399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.907404] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.907408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.907419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 
00:30:31.740 [2024-07-15 20:44:23.917321] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.917371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.917383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.917388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.917392] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.917403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.927420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.927478] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.927489] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.927494] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.927498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.927509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.937399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.937457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.937469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.937476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.937481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.937491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 
00:30:31.740 [2024-07-15 20:44:23.947419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.947477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.947488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.947493] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.947498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.947508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.957497] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.957569] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.957581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.957587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.957591] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.957602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.967472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.967521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.967532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.967537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.967542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.967552] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 
00:30:31.740 [2024-07-15 20:44:23.977511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.977567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.977578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.977583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.977587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.977597] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.987535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.987581] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.987592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.987597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.987602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.987612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 00:30:31.740 [2024-07-15 20:44:23.997427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.740 [2024-07-15 20:44:23.997480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.740 [2024-07-15 20:44:23.997491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.740 [2024-07-15 20:44:23.997496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.740 [2024-07-15 20:44:23.997501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.740 [2024-07-15 20:44:23.997511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.740 qpair failed and we were unable to recover it. 
00:30:31.741 [2024-07-15 20:44:24.007587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.007636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.007648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.007653] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.007657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.007667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.017606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.017661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.017674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.017679] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.017684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.017696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.027635] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.027679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.027696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.027701] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.027705] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.027716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 
00:30:31.741 [2024-07-15 20:44:24.037667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.037713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.037725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.037730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.037735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.037745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.047708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.047754] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.047766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.047771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.047776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.047786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.057649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.057716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.057727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.057733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.057737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.057748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 
00:30:31.741 [2024-07-15 20:44:24.067754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.067802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.067814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.067819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.067823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.067836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.077760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.077845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.077856] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.077862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.077867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.077877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.087805] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.087855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.087866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.087872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.087876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.087887] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 
00:30:31.741 [2024-07-15 20:44:24.097815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.097865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.097876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.097881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.097886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.097896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:31.741 [2024-07-15 20:44:24.107849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:31.741 [2024-07-15 20:44:24.107899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:31.741 [2024-07-15 20:44:24.107910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:31.741 [2024-07-15 20:44:24.107915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:31.741 [2024-07-15 20:44:24.107920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:31.741 [2024-07-15 20:44:24.107930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:31.741 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.117877] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.117922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.117936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.117941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.117946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.117956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 
00:30:32.003 [2024-07-15 20:44:24.127768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.127813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.127824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.127829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.127833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.127844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.137915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.137965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.137976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.137982] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.137986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.137996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.148001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.148071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.148082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.148087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.148092] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.148102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 
00:30:32.003 [2024-07-15 20:44:24.157983] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.158037] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.158055] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.158062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.158070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.158085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.168039] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.168090] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.168109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.168115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.168120] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.168134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.178070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.178142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.178154] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.178160] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.178164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.178175] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 
00:30:32.003 [2024-07-15 20:44:24.188072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.188120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.188131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.188136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.188141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.188151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.198073] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.198121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.198132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.198137] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.198142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.198152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.208105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.208195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.208207] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.208212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.208217] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.208227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 
00:30:32.003 [2024-07-15 20:44:24.218139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.218190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.218201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.218206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.218210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.218220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.003 qpair failed and we were unable to recover it. 00:30:32.003 [2024-07-15 20:44:24.228153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.003 [2024-07-15 20:44:24.228210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.003 [2024-07-15 20:44:24.228221] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.003 [2024-07-15 20:44:24.228226] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.003 [2024-07-15 20:44:24.228233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.003 [2024-07-15 20:44:24.228244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.238180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.238237] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.238249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.238254] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.238258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.238269] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 
00:30:32.004 [2024-07-15 20:44:24.248242] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.248290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.248301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.248310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.248314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.248325] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.258256] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.258311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.258322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.258327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.258332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.258342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.268281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.268327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.268338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.268343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.268347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.268357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 
00:30:32.004 [2024-07-15 20:44:24.278180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.278225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.278239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.278244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.278248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.278259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.288273] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.288319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.288330] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.288336] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.288341] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.288351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.298354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.298404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.298415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.298421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.298425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.298435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 
00:30:32.004 [2024-07-15 20:44:24.308367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.308414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.308425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.308430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.308435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.308445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.318404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.318497] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.318509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.318514] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.318519] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.318529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.328420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.328479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.328490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.328496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.328500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.328510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 
00:30:32.004 [2024-07-15 20:44:24.338375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.338429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.338440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.338448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.338453] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.338463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.348511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.348600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.348612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.348617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.348622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.348632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.358425] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.358472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.358483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.358488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.358493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.358503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 
00:30:32.004 [2024-07-15 20:44:24.368548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.368595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.368606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.004 [2024-07-15 20:44:24.368611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.004 [2024-07-15 20:44:24.368615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.004 [2024-07-15 20:44:24.368625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.004 qpair failed and we were unable to recover it. 00:30:32.004 [2024-07-15 20:44:24.378476] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.004 [2024-07-15 20:44:24.378534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.004 [2024-07-15 20:44:24.378545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.005 [2024-07-15 20:44:24.378551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.005 [2024-07-15 20:44:24.378555] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.005 [2024-07-15 20:44:24.378565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.005 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.388607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.388654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.388665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.388671] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.388675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.388685] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 
00:30:32.266 [2024-07-15 20:44:24.398628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.398678] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.398689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.398694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.398699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.398709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.408654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.408705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.408716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.408721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.408726] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.408736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.418681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.418733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.418744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.418749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.418753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.418763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 
00:30:32.266 [2024-07-15 20:44:24.428702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.428753] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.428766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.428771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.428776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.428786] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.438745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.438792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.438803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.438808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.438813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.438822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.448760] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.448810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.448821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.448826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.448830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.448841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 
00:30:32.266 [2024-07-15 20:44:24.458746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.458796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.266 [2024-07-15 20:44:24.458807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.266 [2024-07-15 20:44:24.458812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.266 [2024-07-15 20:44:24.458817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.266 [2024-07-15 20:44:24.458826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.266 qpair failed and we were unable to recover it. 00:30:32.266 [2024-07-15 20:44:24.468826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.266 [2024-07-15 20:44:24.468890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.468901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.468906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.468911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.468924] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.478874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.478964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.478976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.478981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.478986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.478996] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 
00:30:32.267 [2024-07-15 20:44:24.488925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.489015] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.489026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.489032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.489036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.489047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.498910] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.498965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.498976] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.498981] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.498985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.498995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.508948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.508996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.509007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.509013] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.509017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.509027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 
00:30:32.267 [2024-07-15 20:44:24.518956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.519058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.519073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.519078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.519083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.519093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.528974] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.529021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.529032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.529038] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.529042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.529052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.539002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.539057] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.539068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.539073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.539078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.539087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 
00:30:32.267 [2024-07-15 20:44:24.548984] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.549031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.549042] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.549048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.549052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.549062] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.559066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.559155] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.559167] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.559173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.559180] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.559190] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.569085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.569130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.569141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.569147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.569151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.569162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 
00:30:32.267 [2024-07-15 20:44:24.579126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.579191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.579202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.579208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.579212] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.579222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.589149] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.589195] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.589206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.589211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.589215] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.589226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 00:30:32.267 [2024-07-15 20:44:24.599057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.599109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.267 [2024-07-15 20:44:24.599120] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.267 [2024-07-15 20:44:24.599126] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.267 [2024-07-15 20:44:24.599130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.267 [2024-07-15 20:44:24.599141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.267 qpair failed and we were unable to recover it. 
00:30:32.267 [2024-07-15 20:44:24.609192] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.267 [2024-07-15 20:44:24.609250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.268 [2024-07-15 20:44:24.609261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.268 [2024-07-15 20:44:24.609266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.268 [2024-07-15 20:44:24.609271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.268 [2024-07-15 20:44:24.609281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.268 qpair failed and we were unable to recover it. 00:30:32.268 [2024-07-15 20:44:24.619218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.268 [2024-07-15 20:44:24.619273] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.268 [2024-07-15 20:44:24.619285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.268 [2024-07-15 20:44:24.619290] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.268 [2024-07-15 20:44:24.619294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.268 [2024-07-15 20:44:24.619304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.268 qpair failed and we were unable to recover it. 00:30:32.268 [2024-07-15 20:44:24.629260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.268 [2024-07-15 20:44:24.629368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.268 [2024-07-15 20:44:24.629379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.268 [2024-07-15 20:44:24.629384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.268 [2024-07-15 20:44:24.629388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.268 [2024-07-15 20:44:24.629398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.268 qpair failed and we were unable to recover it. 
00:30:32.268 [2024-07-15 20:44:24.639285] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.268 [2024-07-15 20:44:24.639336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.268 [2024-07-15 20:44:24.639347] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.268 [2024-07-15 20:44:24.639352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.268 [2024-07-15 20:44:24.639357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.268 [2024-07-15 20:44:24.639367] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.268 qpair failed and we were unable to recover it. 00:30:32.529 [2024-07-15 20:44:24.649314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.529 [2024-07-15 20:44:24.649363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.529 [2024-07-15 20:44:24.649373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.529 [2024-07-15 20:44:24.649379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.529 [2024-07-15 20:44:24.649386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.529 [2024-07-15 20:44:24.649396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.529 qpair failed and we were unable to recover it. 00:30:32.529 [2024-07-15 20:44:24.659323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.529 [2024-07-15 20:44:24.659373] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.529 [2024-07-15 20:44:24.659384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.529 [2024-07-15 20:44:24.659389] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.529 [2024-07-15 20:44:24.659394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.529 [2024-07-15 20:44:24.659404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.529 qpair failed and we were unable to recover it. 
00:30:32.529 [2024-07-15 20:44:24.669362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.529 [2024-07-15 20:44:24.669409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.529 [2024-07-15 20:44:24.669420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.529 [2024-07-15 20:44:24.669425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.529 [2024-07-15 20:44:24.669429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.529 [2024-07-15 20:44:24.669439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.529 qpair failed and we were unable to recover it. 00:30:32.529 [2024-07-15 20:44:24.679408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.529 [2024-07-15 20:44:24.679457] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.529 [2024-07-15 20:44:24.679468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.529 [2024-07-15 20:44:24.679473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.529 [2024-07-15 20:44:24.679478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.679488] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.689469] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.689565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.689576] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.689581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.689585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.689595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 
00:30:32.530 [2024-07-15 20:44:24.699468] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.699523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.699533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.699539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.699543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.699553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.709492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.709541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.709551] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.709557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.709561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.709572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.719504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.719554] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.719565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.719570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.719575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.719585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 
00:30:32.530 [2024-07-15 20:44:24.729631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.729690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.729702] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.729707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.729713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.729725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.739558] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.739613] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.739625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.739633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.739638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.739648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.749472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.749518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.749529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.749534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.749538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.749549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 
00:30:32.530 [2024-07-15 20:44:24.759626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.759687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.759698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.759703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.759707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.759717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.769671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.769720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.769731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.769736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.769741] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.769751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.779690] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.779742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.779753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.779758] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.779762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.779773] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 
00:30:32.530 [2024-07-15 20:44:24.789718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.789765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.789776] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.789780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.789785] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.789795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.799750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.799798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.799809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.799814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.799818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.799828] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.809804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.809883] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.809894] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.809899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.809903] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.809913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 
00:30:32.530 [2024-07-15 20:44:24.819796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.530 [2024-07-15 20:44:24.819852] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.530 [2024-07-15 20:44:24.819871] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.530 [2024-07-15 20:44:24.819877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.530 [2024-07-15 20:44:24.819882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.530 [2024-07-15 20:44:24.819895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.530 qpair failed and we were unable to recover it. 00:30:32.530 [2024-07-15 20:44:24.829819] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.829916] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.829932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.829938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.829942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.829954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 00:30:32.531 [2024-07-15 20:44:24.839865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.839919] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.839931] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.839936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.839941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.839951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 
00:30:32.531 [2024-07-15 20:44:24.849888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.849937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.849949] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.849954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.849959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.849969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 00:30:32.531 [2024-07-15 20:44:24.859862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.859914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.859925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.859931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.859935] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.859946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 00:30:32.531 [2024-07-15 20:44:24.869927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.870020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.870032] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.870037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.870042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.870055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 
00:30:32.531 [2024-07-15 20:44:24.879947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.879993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.880005] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.880010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.880014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af8000b90 00:30:32.531 [2024-07-15 20:44:24.880025] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:30:32.531 qpair failed and we were unable to recover it. 00:30:32.531 [2024-07-15 20:44:24.889976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.890050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.890074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.890084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.890091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23b9a50 00:30:32.531 [2024-07-15 20:44:24.890110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.531 qpair failed and we were unable to recover it. 00:30:32.531 [2024-07-15 20:44:24.899901] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:32.531 [2024-07-15 20:44:24.899967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:32.531 [2024-07-15 20:44:24.899992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:32.531 [2024-07-15 20:44:24.900001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:32.531 [2024-07-15 20:44:24.900008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23b9a50 00:30:32.531 [2024-07-15 20:44:24.900027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:30:32.531 qpair failed and we were unable to recover it. 
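The records above all carry one signature: the target side (ctrlr.c) rejects each I/O-queue CONNECT because controller ID 0x1 no longer exists (the test has deliberately torn the controller down), the host then sees the fabrics CONNECT complete with sct 1 / sc 130 (status code type 1 is command-specific; 0x82 decodes to Connect Invalid Parameters in the fabrics command set), and the qpair is dropped with transport error -6 (ENXIO). A quick way to confirm the whole burst is one and the same failure is to aggregate the records; a minimal sketch, assuming this console output has been saved as build.log (the filename is an assumption, and grep/sort/uniq are ordinary tools, not part of the harness):

# Count occurrences of the fabrics CONNECT failure status; -o | wc -l is used
# rather than grep -c so the count stays right even when several records share
# one physical line in the captured log
grep -o 'Connect command completed with error: sct 1, sc 130' build.log | wc -l

# Group the resulting transport errors by qpair id to see which queues never came up
grep -oE 'CQ transport error -6 \(No such device or address\) on qpair id [0-9]+' build.log | sort | uniq -c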
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Write completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 Read completed with error (sct=0, sc=8)
00:30:32.531 starting I/O failed
00:30:32.531 [2024-07-15 20:44:24.900869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.792 [2024-07-15 20:44:24.910079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.792 [2024-07-15 20:44:24.910210] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.792 [2024-07-15 20:44:24.910270] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.792 [2024-07-15 20:44:24.910293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.792 [2024-07-15 20:44:24.910312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b00000b90 [2024-07-15 20:44:24.910358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.792 qpair failed and we were unable to recover it.
00:30:32.792 [2024-07-15 20:44:24.920032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.792 [2024-07-15 20:44:24.920131] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.792 [2024-07-15 20:44:24.920161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.792 [2024-07-15 20:44:24.920177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.792 [2024-07-15 20:44:24.920190] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5b00000b90
00:30:32.792 [2024-07-15 20:44:24.920220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:30:32.792 qpair failed and we were unable to recover it.
00:30:32.792 [2024-07-15 20:44:24.920634] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b7800 is same with the state(5) to be set
00:30:32.792 [2024-07-15 20:44:24.930109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.792 [2024-07-15 20:44:24.930263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.792 [2024-07-15 20:44:24.930325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.792 [2024-07-15 20:44:24.930349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.792 [2024-07-15 20:44:24.930370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af0000b90
00:30:32.792 [2024-07-15 20:44:24.930424] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.792 qpair failed and we were unable to recover it.
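Note that by this point the failures are no longer confined to one queue: the records above die on tqpair 0x7f5b00000b90 (qpair id 1) and 0x7f5af0000b90 (qpair id 4), on top of the 0x7f5af8000b90 and 0x23b9a50 failures earlier, so every queue of the reconnect attempt is going down rather than a single stuck qpair. A one-liner to enumerate the distinct qpair objects, under the same build.log assumption as above (the pointer values are per-run, so only the count is meaningful):

grep -oE 'Failed to connect tqpair=0x[0-9a-f]+' build.log | sort -u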
00:30:32.792 [2024-07-15 20:44:24.940129] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:32.792 [2024-07-15 20:44:24.940226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:32.792 [2024-07-15 20:44:24.940272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:32.792 [2024-07-15 20:44:24.940288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:32.792 [2024-07-15 20:44:24.940301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f5af0000b90
00:30:32.792 [2024-07-15 20:44:24.940332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:30:32.792 qpair failed and we were unable to recover it.
00:30:32.792 [2024-07-15 20:44:24.940714] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b7800 (9): Bad file descriptor
00:30:32.792 Initializing NVMe Controllers
00:30:32.792 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:32.792 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:30:32.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:30:32.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:30:32.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:30:32.792 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:30:32.792 Initialization complete. Launching workers.
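After the last flush fails with EBADF on tqpair 0x23b7800, the reconnect finally succeeds: the controller attaches and one TCP qpair is associated with each of the four lcores. For manually poking the same listener out of band, a rough kernel-host equivalent would be nvme-cli (an assumption for illustration only; the test itself drives the SPDK userspace initiator, not the kernel host):

nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
nvme list-subsys      # confirm the controller attached
nvme disconnect -n nqn.2016-06.io.spdk:cnode1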
00:30:32.792 Starting thread on core 1
00:30:32.792 Starting thread on core 2
00:30:32.792 Starting thread on core 3
00:30:32.792 Starting thread on core 0
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:30:32.792
00:30:32.792 real 0m11.335s
00:30:32.792 user 0m21.266s
00:30:32.792 sys 0m3.874s
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:32.792 ************************************
00:30:32.792 END TEST nvmf_target_disconnect_tc2
00:30:32.792 ************************************
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:30:32.792 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:30:32.793 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:30:32.793 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:30:32.793 20:44:24 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:30:32.793 rmmod nvme_tcp
00:30:32.793 rmmod nvme_fabrics
00:30:32.793 rmmod nvme_keyring
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1539998 ']'
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1539998
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1539998 ']'
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1539998
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1539998
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']'
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1539998'
00:30:32.793 killing process with pid 1539998
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1539998
00:30:32.793 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1539998
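The kill sequence above is the harness's usual teardown pattern: look up the pid's comm to make sure it is an SPDK reactor (here reactor_4) and not sudo, announce it, then kill and reap. Paraphrased as a minimal standalone sketch (killprocess itself lives in common/autotest_common.sh; this is a simplification for illustration, not the literal helper):

pid=1539998
if [ "$(ps --no-headers -o comm= "$pid")" != sudo ]; then
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"   # reap the child so the next test starts from a clean slate
fi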
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:30:33.053 20:44:25 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:30:34.965 20:44:27 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:30:34.965
00:30:34.965 real 0m22.186s
00:30:34.965 user 0m48.980s
00:30:34.965 sys 0m10.386s
00:30:34.965 20:44:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:34.965 20:44:27 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:30:34.965 ************************************
00:30:34.965 END TEST nvmf_target_disconnect
00:30:34.965 ************************************
00:30:35.226 20:44:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:30:35.226 20:44:27 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host
00:30:35.226 20:44:27 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable
00:30:35.226 20:44:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:35.226 20:44:27 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT
00:30:35.226
00:30:35.226 real 23m23.131s
00:30:35.226 user 47m32.287s
00:30:35.226 sys 7m37.922s
00:30:35.226 20:44:27 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable
00:30:35.226 20:44:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:30:35.226 ************************************
00:30:35.226 END TEST nvmf_tcp
00:30:35.226 ************************************
00:30:35.226 20:44:27 -- common/autotest_common.sh@1142 -- # return 0
00:30:35.226 20:44:27 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]]
00:30:35.226 20:44:27 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:35.226 20:44:27 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:30:35.226 20:44:27 -- common/autotest_common.sh@1105 -- # xtrace_disable
00:30:35.226 20:44:27 -- common/autotest_common.sh@10 -- # set +x
00:30:35.226 ************************************
00:30:35.226 START TEST spdkcli_nvmf_tcp
00:30:35.226 ************************************
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:30:35.226 * Looking for test storage...
00:30:35.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:30:35.226 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1541840 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1541840 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1541840 ']' 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:35.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:35.487 20:44:27 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:35.487 [2024-07-15 20:44:27.686682] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:30:35.487 [2024-07-15 20:44:27.686758] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1541840 ] 00:30:35.487 EAL: No free 2048 kB hugepages reported on node 1 00:30:35.487 [2024-07-15 20:44:27.758281] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:35.487 [2024-07-15 20:44:27.833177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:35.487 [2024-07-15 20:44:27.833180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:36.086 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:36.086 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:30:36.086 20:44:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:30:36.086 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:36.086 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:36.347 20:44:28 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:30:36.347 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:30:36.347 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:30:36.347 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:30:36.347 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:30:36.347 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:30:36.347 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:30:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:30:36.347 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:36.347 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:30:36.347 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:30:36.347 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:30:36.347 ' 00:30:38.889 [2024-07-15 20:44:30.823825] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:39.829 [2024-07-15 20:44:31.987653] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:30:42.368 [2024-07-15 20:44:34.125939] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:30:43.750 [2024-07-15 20:44:35.963507] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:30:45.134 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:30:45.134 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:30:45.134 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:30:45.134 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.134 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.134 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:30:45.134 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:30:45.134 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:30:45.134 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:30:45.134 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.134 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.396 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:30:45.396 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:45.396 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.396 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:30:45.396 20:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:45.657 20:44:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:30:45.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:30:45.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:45.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:30:45.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:30:45.657 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:30:45.657 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:30:45.657 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:30:45.657 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:30:45.657 ' 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:30:50.945 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:30:50.945 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:30:50.945 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:30:50.945 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1541840 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1541840 ']' 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1541840 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1541840 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1541840' 00:30:51.207 killing process with pid 1541840 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1541840 00:30:51.207 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1541840 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1541840 ']' 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1541840 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1541840 ']' 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1541840 00:30:51.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1541840) - No such process 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1541840 is not found' 00:30:51.467 Process with pid 1541840 is not found 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:30:51.467 00:30:51.467 real 0m16.129s 00:30:51.467 user 0m33.980s 00:30:51.467 sys 0m0.778s 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:51.467 20:44:43 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:51.467 ************************************ 00:30:51.467 END TEST spdkcli_nvmf_tcp 00:30:51.467 ************************************ 00:30:51.467 20:44:43 -- common/autotest_common.sh@1142 -- # return 0 00:30:51.467 20:44:43 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:51.467 20:44:43 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:51.467 20:44:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:51.468 20:44:43 -- common/autotest_common.sh@10 -- # set +x 00:30:51.468 ************************************ 00:30:51.468 START TEST nvmf_identify_passthru 00:30:51.468 ************************************ 00:30:51.468 20:44:43 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:30:51.468 * Looking for test storage... 00:30:51.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:51.468 20:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:51.468 20:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:30:51.468 20:44:43 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:51.468 20:44:43 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:51.468 20:44:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:51.468 20:44:43 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:51.468 20:44:43 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:30:51.468 20:44:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.612 20:44:51 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:30:59.612 Found 0000:31:00.0 (0x8086 - 0x159b) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:30:59.612 Found 0000:31:00.1 (0x8086 - 0x159b) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:30:59.612 Found net devices under 0000:31:00.0: cvl_0_0 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:30:59.612 Found net devices under 0000:31:00.1: cvl_0_1 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
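The probe above resolves each supported PCI function to its kernel network interface by globbing sysfs, which is what produces the "Found net devices under ..." lines. A condensed sketch of that lookup, with the two e810 addresses from this run hard-coded for illustration (the script takes them from its pci_devs array):

    for pci in 0000:31:00.0 0000:31:00.1; do
        for path in /sys/bus/pci/devices/$pci/net/*; do
            [ -e "$path" ] || continue              # no netdev bound to this function
            echo "Found net devices under $pci: ${path##*/}"
        done
    done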
00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.612 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:59.613 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.613 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.664 ms 00:30:59.613 00:30:59.613 --- 10.0.0.2 ping statistics --- 00:30:59.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.613 rtt min/avg/max/mdev = 0.664/0.664/0.664/0.000 ms 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.613 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.613 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.333 ms 00:30:59.613 00:30:59.613 --- 10.0.0.1 ping statistics --- 00:30:59.613 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.613 rtt min/avg/max/mdev = 0.333/0.333/0.333/0.000 ms 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:59.613 20:44:51 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:65:00.0 00:30:59.613 20:44:51 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:65:00.0 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:65:00.0 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:65:00.0 ']' 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:30:59.613 20:44:51 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:30:59.613 EAL: No free 2048 kB hugepages reported on node 1 00:30:59.873 
20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=S64GNE0R605494 00:30:59.873 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:65:00.0' -i 0 00:30:59.873 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:30:59.873 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:00.133 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=SAMSUNG 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1549267 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:00.395 20:44:52 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1549267 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1549267 ']' 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:00.395 20:44:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:00.656 [2024-07-15 20:44:52.796274] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:31:00.656 [2024-07-15 20:44:52.796332] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.656 EAL: No free 2048 kB hugepages reported on node 1 00:31:00.656 [2024-07-15 20:44:52.872599] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:00.656 [2024-07-15 20:44:52.944855] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.656 [2024-07-15 20:44:52.944893] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
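The target is launched inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc, and the harness then blocks until the UNIX domain socket answers before sending configuration RPCs. A condensed sketch of that launch-and-wait step, assuming the default /var/tmp/spdk.sock socket (the real waitforlisten helper in autotest_common.sh retries longer and checks more failure modes):

    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    nvmfpid=$!
    until scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" || exit 1                # bail out if the target died during startup
        sleep 0.5
    done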
00:31:00.656 [2024-07-15 20:44:52.944901] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.656 [2024-07-15 20:44:52.944907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.656 [2024-07-15 20:44:52.944913] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.656 [2024-07-15 20:44:52.945057] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.656 [2024-07-15 20:44:52.945200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:31:00.656 [2024-07-15 20:44:52.945362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.656 [2024-07-15 20:44:52.945362] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:31:01.228 20:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.228 INFO: Log level set to 20 00:31:01.228 INFO: Requests: 00:31:01.228 { 00:31:01.228 "jsonrpc": "2.0", 00:31:01.228 "method": "nvmf_set_config", 00:31:01.228 "id": 1, 00:31:01.228 "params": { 00:31:01.228 "admin_cmd_passthru": { 00:31:01.228 "identify_ctrlr": true 00:31:01.228 } 00:31:01.228 } 00:31:01.228 } 00:31:01.228 00:31:01.228 INFO: response: 00:31:01.228 { 00:31:01.228 "jsonrpc": "2.0", 00:31:01.228 "id": 1, 00:31:01.228 "result": true 00:31:01.228 } 00:31:01.228 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.228 20:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.228 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.228 INFO: Setting log level to 20 00:31:01.228 INFO: Setting log level to 20 00:31:01.228 INFO: Log level set to 20 00:31:01.228 INFO: Log level set to 20 00:31:01.228 INFO: Requests: 00:31:01.228 { 00:31:01.228 "jsonrpc": "2.0", 00:31:01.228 "method": "framework_start_init", 00:31:01.228 "id": 1 00:31:01.228 } 00:31:01.228 00:31:01.228 INFO: Requests: 00:31:01.228 { 00:31:01.228 "jsonrpc": "2.0", 00:31:01.228 "method": "framework_start_init", 00:31:01.228 "id": 1 00:31:01.228 } 00:31:01.228 00:31:01.490 [2024-07-15 20:44:53.647963] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:01.490 INFO: response: 00:31:01.490 { 00:31:01.490 "jsonrpc": "2.0", 00:31:01.490 "id": 1, 00:31:01.490 "result": true 00:31:01.490 } 00:31:01.490 00:31:01.490 INFO: response: 00:31:01.490 { 00:31:01.490 "jsonrpc": "2.0", 00:31:01.490 "id": 1, 00:31:01.490 "result": true 00:31:01.490 } 00:31:01.490 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.490 20:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.490 20:44:53 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:31:01.490 INFO: Setting log level to 40 00:31:01.490 INFO: Setting log level to 40 00:31:01.490 INFO: Setting log level to 40 00:31:01.490 [2024-07-15 20:44:53.661292] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.490 20:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.490 20:44:53 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.490 20:44:53 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 Nvme0n1 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 [2024-07-15 20:44:54.053532] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:01.751 [ 00:31:01.751 { 00:31:01.751 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:01.751 "subtype": "Discovery", 00:31:01.751 "listen_addresses": [], 00:31:01.751 "allow_any_host": true, 00:31:01.751 "hosts": [] 00:31:01.751 }, 00:31:01.751 { 00:31:01.751 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:01.751 "subtype": "NVMe", 00:31:01.751 "listen_addresses": [ 00:31:01.751 { 00:31:01.751 "trtype": "TCP", 00:31:01.751 "adrfam": "IPv4", 00:31:01.751 "traddr": "10.0.0.2", 00:31:01.751 "trsvcid": "4420" 00:31:01.751 } 00:31:01.751 ], 00:31:01.751 "allow_any_host": true, 00:31:01.751 "hosts": [], 00:31:01.751 "serial_number": 
"SPDK00000000000001", 00:31:01.751 "model_number": "SPDK bdev Controller", 00:31:01.751 "max_namespaces": 1, 00:31:01.751 "min_cntlid": 1, 00:31:01.751 "max_cntlid": 65519, 00:31:01.751 "namespaces": [ 00:31:01.751 { 00:31:01.751 "nsid": 1, 00:31:01.751 "bdev_name": "Nvme0n1", 00:31:01.751 "name": "Nvme0n1", 00:31:01.751 "nguid": "3634473052605494002538450000002B", 00:31:01.751 "uuid": "36344730-5260-5494-0025-38450000002b" 00:31:01.751 } 00:31:01.751 ] 00:31:01.751 } 00:31:01.751 ] 00:31:01.751 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:01.751 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:01.751 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.012 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=S64GNE0R605494 00:31:02.012 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:02.012 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:02.012 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:02.012 EAL: No free 2048 kB hugepages reported on node 1 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=SAMSUNG 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' S64GNE0R605494 '!=' S64GNE0R605494 ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' SAMSUNG '!=' SAMSUNG ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:02.273 20:44:54 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:02.273 rmmod nvme_tcp 00:31:02.273 rmmod nvme_fabrics 00:31:02.273 rmmod nvme_keyring 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:31:02.273 20:44:54 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1549267 ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1549267 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1549267 ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1549267 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1549267 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:31:02.273 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:31:02.274 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1549267' 00:31:02.274 killing process with pid 1549267 00:31:02.274 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1549267 00:31:02.274 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1549267 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:02.535 20:44:54 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:02.535 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:02.535 20:44:54 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.081 20:44:56 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:05.081 00:31:05.081 real 0m13.214s 00:31:05.081 user 0m9.960s 00:31:05.081 sys 0m6.569s 00:31:05.081 20:44:56 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:05.081 20:44:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:31:05.081 ************************************ 00:31:05.081 END TEST nvmf_identify_passthru 00:31:05.081 ************************************ 00:31:05.081 20:44:56 -- common/autotest_common.sh@1142 -- # return 0 00:31:05.081 20:44:56 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:05.081 20:44:56 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:05.081 20:44:56 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:05.081 20:44:56 -- common/autotest_common.sh@10 -- # set +x 00:31:05.081 ************************************ 00:31:05.081 START TEST nvmf_dif 00:31:05.081 ************************************ 00:31:05.081 20:44:56 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:31:05.081 * Looking for test storage... 
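The passthru test that just finished boils down to a short RPC sequence against a running nvmf_tgt: attach the local PCIe controller as a bdev, wrap it in a TCP subsystem, then read the identify data back over the fabric and compare it with the bare-metal values. A minimal sketch of the same steps driven by SPDK's stock scripts/rpc.py, assuming the PCI address, NQN and serial used in the run above:

scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:65:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# read the identify data back through the fabric, as the test does with grep/awk
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'

If the serial or model number printed here differs from what the same tool reports against the raw PCIe device, passthru is broken; that is the whole assertion the test makes before tearing the subsystem down.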
00:31:05.081 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:05.081 20:44:57 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:05.081 20:44:57 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:05.081 20:44:57 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:05.081 20:44:57 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:05.081 20:44:57 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.081 20:44:57 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.081 20:44:57 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.081 20:44:57 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:31:05.081 20:44:57 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:05.081 20:44:57 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:05.082 20:44:57 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:31:05.082 20:44:57 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:05.082 20:44:57 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:05.082 20:44:57 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:31:05.082 20:44:57 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:05.082 20:44:57 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:05.082 20:44:57 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:05.082 20:44:57 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:31:05.082 20:44:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:31:13.224 Found 0000:31:00.0 (0x8086 - 0x159b) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:31:13.224 Found 0000:31:00.1 (0x8086 - 0x159b) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
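The pci_devs/pci_net_devs machinery traced in this stretch maps supported NIC PCI IDs (e810, x722, mlx) to kernel net device names through sysfs, and the loop entered here resolves each matching function in turn. A minimal sketch of that lookup for one port, assuming sysfs is mounted and using the E810 address found above:

# resolve the netdev that sits behind a PCI function, e.g. cvl_0_0
pci=0000:31:00.0
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)   # expands to .../net/cvl_0_0
pci_net_devs=("${pci_net_devs[@]##*/}")            # strip the sysfs path, keep the name
echo "Found net devices under $pci: ${pci_net_devs[*]}"

Both functions of the dual-port E810 resolve this way (cvl_0_0 and cvl_0_1), which is what lets the harness build a two-interface NVMe/TCP test bed on a single machine.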
00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:31:13.224 Found net devices under 0000:31:00.0: cvl_0_0 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:31:13.224 Found net devices under 0000:31:00.1: cvl_0_1 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:13.224 20:45:05 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:13.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:13.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.660 ms 00:31:13.224 00:31:13.224 --- 10.0.0.2 ping statistics --- 00:31:13.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.224 rtt min/avg/max/mdev = 0.660/0.660/0.660/0.000 ms 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:13.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:13.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:31:13.224 00:31:13.224 --- 10.0.0.1 ping statistics --- 00:31:13.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:13.224 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:31:13.224 20:45:05 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:31:17.430 0000:80:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:80:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.6 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:65:00.0 (144d a80a): Already using the vfio-pci driver 00:31:17.430 0000:00:01.7 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.4 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.5 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.2 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.3 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.0 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 0000:00:01.1 (8086 0b00): Already using the vfio-pci driver 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:17.430 20:45:09 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:17.430 20:45:09 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1556293 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1556293 00:31:17.430 20:45:09 nvmf_dif -- nvmf/common.sh@480 -- # ip 
netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1556293 ']' 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:17.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:17.430 20:45:09 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:17.430 [2024-07-15 20:45:09.386269] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:31:17.430 [2024-07-15 20:45:09.386326] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:17.430 EAL: No free 2048 kB hugepages reported on node 1 00:31:17.430 [2024-07-15 20:45:09.464490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:17.430 [2024-07-15 20:45:09.537044] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:17.430 [2024-07-15 20:45:09.537084] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:17.430 [2024-07-15 20:45:09.537092] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:17.430 [2024-07-15 20:45:09.537099] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:17.430 [2024-07-15 20:45:09.537105] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
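Before nvmf_tgt came up, nvmf_tcp_init wired the two E810 ports into a loopback test bed: one port stays in the root namespace as the initiator, the other moves into a private namespace where the target runs, so NVMe/TCP traffic actually crosses the physical links. A minimal sketch of that wiring, using exactly the names and addresses from the trace above:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                 # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator

The two successful pings above gate the rest of the suite, and the nvmf_tgt launched here runs inside cvl_0_0_ns_spdk via NVMF_TARGET_NS_CMD, as the ip netns exec prefix on its command line shows.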
00:31:17.430 [2024-07-15 20:45:09.537131] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:31:18.003 20:45:10 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 20:45:10 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:18.003 20:45:10 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:31:18.003 20:45:10 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 [2024-07-15 20:45:10.200078] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.003 20:45:10 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 ************************************ 00:31:18.003 START TEST fio_dif_1_default 00:31:18.003 ************************************ 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 bdev_null0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:18.003 [2024-07-15 20:45:10.284411] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:18.003 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:18.003 { 00:31:18.003 "params": { 00:31:18.003 "name": "Nvme$subsystem", 00:31:18.003 "trtype": "$TEST_TRANSPORT", 00:31:18.003 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:18.004 "adrfam": "ipv4", 00:31:18.004 "trsvcid": "$NVMF_PORT", 00:31:18.004 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:18.004 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:18.004 "hdgst": ${hdgst:-false}, 00:31:18.004 "ddgst": ${ddgst:-false} 00:31:18.004 }, 00:31:18.004 "method": "bdev_nvme_attach_controller" 00:31:18.004 } 00:31:18.004 EOF 00:31:18.004 )") 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:18.004 "params": { 00:31:18.004 "name": "Nvme0", 00:31:18.004 "trtype": "tcp", 00:31:18.004 "traddr": "10.0.0.2", 00:31:18.004 "adrfam": "ipv4", 00:31:18.004 "trsvcid": "4420", 00:31:18.004 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:18.004 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:18.004 "hdgst": false, 00:31:18.004 "ddgst": false 00:31:18.004 }, 00:31:18.004 "method": "bdev_nvme_attach_controller" 00:31:18.004 }' 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:18.004 20:45:10 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:18.608 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:18.608 fio-3.35 00:31:18.608 Starting 1 thread 00:31:18.608 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.817 00:31:30.817 filename0: (groupid=0, jobs=1): err= 0: pid=1556999: Mon Jul 15 20:45:21 2024 00:31:30.817 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:31:30.817 slat (nsec): min=5418, max=52557, avg=6365.01, stdev=2152.60 00:31:30.817 clat (usec): min=41100, max=42790, avg=41981.81, stdev=81.21 00:31:30.817 lat (usec): min=41105, max=42826, avg=41988.18, stdev=81.67 00:31:30.817 clat percentiles (usec): 00:31:30.817 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:31:30.817 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:30.817 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:30.817 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:31:30.817 | 99.99th=[42730] 00:31:30.817 bw ( KiB/s): min= 352, max= 384, per=99.75%, avg=380.80, stdev= 9.85, samples=20 00:31:30.817 iops : min= 88, max= 96, 
avg=95.20, stdev= 2.46, samples=20 00:31:30.817 lat (msec) : 50=100.00% 00:31:30.817 cpu : usr=95.11%, sys=4.70%, ctx=14, majf=0, minf=225 00:31:30.817 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:30.817 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.817 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:30.817 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:30.817 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:30.817 00:31:30.817 Run status group 0 (all jobs): 00:31:30.817 READ: bw=381KiB/s (390kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=3824KiB (3916kB), run=10038-10038msec 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 00:31:30.817 real 0m11.283s 00:31:30.817 user 0m26.624s 00:31:30.817 sys 0m0.821s 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 ************************************ 00:31:30.817 END TEST fio_dif_1_default 00:31:30.817 ************************************ 00:31:30.817 20:45:21 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:30.817 20:45:21 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:30.817 20:45:21 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:30.817 20:45:21 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 ************************************ 00:31:30.817 START TEST fio_dif_1_multi_subsystems 00:31:30.817 ************************************ 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
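For fio_dif_1_default the target side is a 64 MB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 1 (bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1), exported through a transport created with --dif-insert-or-strip; fio drives it from the initiator side through the spdk_bdev ioengine. The printf traced during setup only shows the per-controller fragment; gen_nvmf_target_json wraps it in a bdev-subsystem document that fio loads via --spdk_json_conf. A sketch of the assembled JSON, assuming the usual subsystems/bdev wrapper and the values from this run:

{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}

The multi-subsystems variant starting here repeats the same recipe twice, once per cnode, so its config array carries two bdev_nvme_attach_controller entries.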
00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 bdev_null0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 [2024-07-15 20:45:21.646003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 bdev_null1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.817 { 00:31:30.817 "params": { 00:31:30.817 "name": "Nvme$subsystem", 00:31:30.817 "trtype": "$TEST_TRANSPORT", 00:31:30.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.817 "adrfam": "ipv4", 00:31:30.817 "trsvcid": "$NVMF_PORT", 00:31:30.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.817 "hdgst": ${hdgst:-false}, 00:31:30.817 "ddgst": ${ddgst:-false} 00:31:30.817 }, 00:31:30.817 "method": "bdev_nvme_attach_controller" 00:31:30.817 } 00:31:30.817 EOF 00:31:30.817 )") 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:30.817 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:30.818 { 00:31:30.818 "params": { 00:31:30.818 "name": "Nvme$subsystem", 00:31:30.818 "trtype": "$TEST_TRANSPORT", 00:31:30.818 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:30.818 "adrfam": "ipv4", 00:31:30.818 "trsvcid": "$NVMF_PORT", 00:31:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:30.818 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:30.818 "hdgst": ${hdgst:-false}, 00:31:30.818 "ddgst": ${ddgst:-false} 00:31:30.818 }, 00:31:30.818 "method": "bdev_nvme_attach_controller" 00:31:30.818 } 00:31:30.818 EOF 00:31:30.818 )") 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
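The sanitizer probing around each fio_bdev call (ldd piped through grep libasan / libclang_rt.asan) decides what to stuff into LD_PRELOAD alongside the SPDK fio plugin. A minimal sketch of the resulting launch, assuming fio sources under /usr/src/fio and an SPDK tree built with the plugin; bdev.json and jobs.fio are hypothetical stand-ins for the /dev/fd/62 and /dev/fd/61 process substitutions the harness actually passes:

# no ASAN library was found above, so LD_PRELOAD carries only the plugin
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json jobs.fio

With two subsystems the generated job file carries two entries, filename0 and filename1, which is why the run below starts two threads, one against cnode0 and one against cnode1.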
00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:30.818 "params": { 00:31:30.818 "name": "Nvme0", 00:31:30.818 "trtype": "tcp", 00:31:30.818 "traddr": "10.0.0.2", 00:31:30.818 "adrfam": "ipv4", 00:31:30.818 "trsvcid": "4420", 00:31:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:30.818 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:30.818 "hdgst": false, 00:31:30.818 "ddgst": false 00:31:30.818 }, 00:31:30.818 "method": "bdev_nvme_attach_controller" 00:31:30.818 },{ 00:31:30.818 "params": { 00:31:30.818 "name": "Nvme1", 00:31:30.818 "trtype": "tcp", 00:31:30.818 "traddr": "10.0.0.2", 00:31:30.818 "adrfam": "ipv4", 00:31:30.818 "trsvcid": "4420", 00:31:30.818 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:30.818 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:30.818 "hdgst": false, 00:31:30.818 "ddgst": false 00:31:30.818 }, 00:31:30.818 "method": "bdev_nvme_attach_controller" 00:31:30.818 }' 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:30.818 20:45:21 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:30.818 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.818 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:30.818 fio-3.35 00:31:30.818 Starting 2 threads 00:31:30.818 EAL: No free 2048 kB hugepages reported on node 1 00:31:40.796 00:31:40.796 filename0: (groupid=0, jobs=1): err= 0: pid=1559218: Mon Jul 15 20:45:32 2024 00:31:40.797 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10040msec) 00:31:40.797 slat (nsec): min=5419, max=36661, avg=7215.65, stdev=4086.57 00:31:40.797 clat (usec): min=41097, max=43051, avg=41988.76, stdev=131.36 00:31:40.797 lat (usec): min=41102, max=43059, avg=41995.97, stdev=131.51 00:31:40.797 clat percentiles (usec): 00:31:40.797 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:40.797 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:40.797 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:40.797 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:31:40.797 | 99.99th=[43254] 
00:31:40.797 bw ( KiB/s): min= 351, max= 384, per=49.88%, avg=380.75, stdev=10.00, samples=20 00:31:40.797 iops : min= 87, max= 96, avg=95.15, stdev= 2.62, samples=20 00:31:40.797 lat (msec) : 50=100.00% 00:31:40.797 cpu : usr=96.99%, sys=2.81%, ctx=14, majf=0, minf=88 00:31:40.797 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.797 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.797 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:40.797 filename1: (groupid=0, jobs=1): err= 0: pid=1559219: Mon Jul 15 20:45:32 2024 00:31:40.797 read: IOPS=95, BW=381KiB/s (390kB/s)(3824KiB/10038msec) 00:31:40.797 slat (nsec): min=5413, max=32847, avg=7119.81, stdev=4067.67 00:31:40.797 clat (usec): min=41036, max=43003, avg=41980.36, stdev=154.83 00:31:40.797 lat (usec): min=41041, max=43036, avg=41987.48, stdev=155.38 00:31:40.797 clat percentiles (usec): 00:31:40.797 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:31:40.797 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:31:40.797 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:31:40.797 | 99.00th=[42730], 99.50th=[42730], 99.90th=[43254], 99.95th=[43254], 00:31:40.797 | 99.99th=[43254] 00:31:40.797 bw ( KiB/s): min= 351, max= 384, per=49.88%, avg=380.75, stdev=10.00, samples=20 00:31:40.797 iops : min= 87, max= 96, avg=95.15, stdev= 2.62, samples=20 00:31:40.797 lat (msec) : 50=100.00% 00:31:40.797 cpu : usr=96.41%, sys=3.39%, ctx=14, majf=0, minf=158 00:31:40.797 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:40.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.797 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:40.797 issued rwts: total=956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:40.797 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:40.797 00:31:40.797 Run status group 0 (all jobs): 00:31:40.797 READ: bw=762KiB/s (780kB/s), 381KiB/s-381KiB/s (390kB/s-390kB/s), io=7648KiB (7832kB), run=10038-10040msec 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@10 -- # set +x 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.797 00:31:40.797 real 0m11.332s 00:31:40.797 user 0m33.559s 00:31:40.797 sys 0m0.953s 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 ************************************ 00:31:40.797 END TEST fio_dif_1_multi_subsystems 00:31:40.797 ************************************ 00:31:40.797 20:45:32 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:31:40.797 20:45:32 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:31:40.797 20:45:32 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:31:40.797 20:45:32 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:40.797 20:45:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 ************************************ 00:31:40.797 START TEST fio_dif_rand_params 00:31:40.797 ************************************ 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:40.797 20:45:33 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.797 bdev_null0 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.797 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:40.798 [2024-07-15 20:45:33.056192] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:40.798 { 00:31:40.798 "params": { 00:31:40.798 "name": "Nvme$subsystem", 00:31:40.798 "trtype": "$TEST_TRANSPORT", 00:31:40.798 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:40.798 "adrfam": "ipv4", 00:31:40.798 "trsvcid": "$NVMF_PORT", 00:31:40.798 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:31:40.798 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:40.798 "hdgst": ${hdgst:-false}, 00:31:40.798 "ddgst": ${ddgst:-false} 00:31:40.798 }, 00:31:40.798 "method": "bdev_nvme_attach_controller" 00:31:40.798 } 00:31:40.798 EOF 00:31:40.798 )") 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:40.798 "params": { 00:31:40.798 "name": "Nvme0", 00:31:40.798 "trtype": "tcp", 00:31:40.798 "traddr": "10.0.0.2", 00:31:40.798 "adrfam": "ipv4", 00:31:40.798 "trsvcid": "4420", 00:31:40.798 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:40.798 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:40.798 "hdgst": false, 00:31:40.798 "ddgst": false 00:31:40.798 }, 00:31:40.798 "method": "bdev_nvme_attach_controller" 00:31:40.798 }' 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:40.798 20:45:33 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:41.474 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:31:41.474 ... 
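[That JSON is then handed to fio together with a generated job file, both over /dev/fd descriptors, with the SPDK bdev engine pulled in via LD_PRELOAD — the LD_PRELOAD and --spdk_json_conf arguments are visible in the trace just above. A rough stand-alone equivalent, reusing the gen_target_json sketch earlier; the bdev name Nvme0n1 and the inline job options are assumptions matched to the logged job header (randread, 128KiB blocks, iodepth 3, 3 jobs, 5s runtime).]

# thread=1 is required by the SPDK fio plugin, which runs jobs as threads.
PLUGIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
LD_PRELOAD="$PLUGIN" /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf /dev/fd/62 /dev/fd/61 \
    62< <(gen_target_json 0) \
    61< <(printf '%s\n' '[global]' 'thread=1' 'rw=randread' 'bs=128k' \
          'iodepth=3' 'numjobs=3' 'runtime=5' 'time_based=1' \
          '[filename0]' 'filename=Nvme0n1')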
00:31:41.474 fio-3.35 00:31:41.474 Starting 3 threads 00:31:41.474 EAL: No free 2048 kB hugepages reported on node 1 00:31:46.746 00:31:46.746 filename0: (groupid=0, jobs=1): err= 0: pid=1561700: Mon Jul 15 20:45:39 2024 00:31:46.746 read: IOPS=201, BW=25.1MiB/s (26.4MB/s)(126MiB/5006msec) 00:31:46.746 slat (nsec): min=5467, max=33163, avg=7977.55, stdev=1932.92 00:31:46.746 clat (usec): min=5123, max=56716, avg=14898.09, stdev=13881.90 00:31:46.746 lat (usec): min=5132, max=56725, avg=14906.07, stdev=13881.79 00:31:46.746 clat percentiles (usec): 00:31:46.746 | 1.00th=[ 5604], 5.00th=[ 6390], 10.00th=[ 7373], 20.00th=[ 8291], 00:31:46.746 | 30.00th=[ 8848], 40.00th=[ 9372], 50.00th=[ 9896], 60.00th=[10552], 00:31:46.746 | 70.00th=[11338], 80.00th=[12256], 90.00th=[49546], 95.00th=[51119], 00:31:46.746 | 99.00th=[52691], 99.50th=[54789], 99.90th=[55837], 99.95th=[56886], 00:31:46.746 | 99.99th=[56886] 00:31:46.746 bw ( KiB/s): min=19968, max=34048, per=30.92%, avg=25728.00, stdev=5312.99, samples=10 00:31:46.746 iops : min= 156, max= 266, avg=201.00, stdev=41.51, samples=10 00:31:46.746 lat (msec) : 10=51.34%, 20=35.85%, 50=4.77%, 100=8.04% 00:31:46.746 cpu : usr=96.14%, sys=3.58%, ctx=11, majf=0, minf=52 00:31:46.746 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 issued rwts: total=1007,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:46.746 filename0: (groupid=0, jobs=1): err= 0: pid=1561701: Mon Jul 15 20:45:39 2024 00:31:46.746 read: IOPS=209, BW=26.1MiB/s (27.4MB/s)(132MiB/5041msec) 00:31:46.746 slat (nsec): min=5442, max=32264, avg=7748.21, stdev=1705.36 00:31:46.746 clat (usec): min=5490, max=90193, avg=14336.38, stdev=12264.75 00:31:46.746 lat (usec): min=5498, max=90200, avg=14344.13, stdev=12264.73 00:31:46.746 clat percentiles (usec): 00:31:46.746 | 1.00th=[ 5932], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8356], 00:31:46.746 | 30.00th=[ 9110], 40.00th=[ 9896], 50.00th=[10683], 60.00th=[11469], 00:31:46.746 | 70.00th=[12518], 80.00th=[13960], 90.00th=[16909], 95.00th=[50594], 00:31:46.746 | 99.00th=[54789], 99.50th=[55313], 99.90th=[56361], 99.95th=[90702], 00:31:46.746 | 99.99th=[90702] 00:31:46.746 bw ( KiB/s): min=22528, max=30976, per=32.33%, avg=26905.60, stdev=2914.98, samples=10 00:31:46.746 iops : min= 176, max= 242, avg=210.20, stdev=22.77, samples=10 00:31:46.746 lat (msec) : 10=40.99%, 20=49.72%, 50=3.04%, 100=6.26% 00:31:46.746 cpu : usr=96.01%, sys=3.73%, ctx=15, majf=0, minf=133 00:31:46.746 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 issued rwts: total=1054,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:46.746 filename0: (groupid=0, jobs=1): err= 0: pid=1561702: Mon Jul 15 20:45:39 2024 00:31:46.746 read: IOPS=241, BW=30.2MiB/s (31.7MB/s)(152MiB/5034msec) 00:31:46.746 slat (nsec): min=5436, max=36101, avg=7929.35, stdev=1714.04 00:31:46.746 clat (usec): min=4874, max=56572, avg=12405.63, stdev=10464.52 00:31:46.746 lat (usec): min=4880, max=56578, avg=12413.56, stdev=10464.53 00:31:46.746 clat percentiles (usec): 
00:31:46.746 | 1.00th=[ 5276], 5.00th=[ 5735], 10.00th=[ 6652], 20.00th=[ 7767], 00:31:46.746 | 30.00th=[ 8356], 40.00th=[ 8979], 50.00th=[ 9503], 60.00th=[10552], 00:31:46.746 | 70.00th=[11469], 80.00th=[12649], 90.00th=[14353], 95.00th=[48497], 00:31:46.746 | 99.00th=[52167], 99.50th=[53740], 99.90th=[56361], 99.95th=[56361], 00:31:46.746 | 99.99th=[56361] 00:31:46.746 bw ( KiB/s): min=22272, max=40192, per=37.32%, avg=31052.80, stdev=5590.54, samples=10 00:31:46.746 iops : min= 174, max= 314, avg=242.60, stdev=43.68, samples=10 00:31:46.746 lat (msec) : 10=55.10%, 20=38.24%, 50=2.96%, 100=3.70% 00:31:46.746 cpu : usr=95.85%, sys=3.91%, ctx=10, majf=0, minf=96 00:31:46.746 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:46.746 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:46.746 issued rwts: total=1216,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:46.746 latency : target=0, window=0, percentile=100.00%, depth=3 00:31:46.746 00:31:46.746 Run status group 0 (all jobs): 00:31:46.746 READ: bw=81.3MiB/s (85.2MB/s), 25.1MiB/s-30.2MiB/s (26.4MB/s-31.7MB/s), io=410MiB (430MB), run=5006-5041msec 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
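[Between runs the harness tears the target state down and rebuilds it with a different DIF type; the rpc_cmd calls in the records that follow map one-to-one onto SPDK's scripts/rpc.py. A direct equivalent of the create/destroy pair for subsystem 0, assuming the default RPC socket of the already-running nvmf target and mirroring the exact arguments traced in this run.]

RPC=./scripts/rpc.py
sub=0
# 64MiB null bdev with 512B blocks; --md-size 16 --dif-type 2 attaches 16
# bytes of per-block metadata carrying DIF type-2 protection information.
$RPC bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
$RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
    --serial-number "53313233-$sub" --allow-any-host
$RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
$RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
    -t tcp -a 10.0.0.2 -s 4420
# Teardown mirrors creation in reverse: subsystem first, then the bdev.
$RPC nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$sub"
$RPC bdev_null_delete "bdev_null$sub"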
00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.005 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.005 bdev_null0 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 [2024-07-15 20:45:39.223698] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 bdev_null1 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 bdev_null2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.006 { 00:31:47.006 "params": { 00:31:47.006 "name": "Nvme$subsystem", 00:31:47.006 "trtype": "$TEST_TRANSPORT", 00:31:47.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.006 "adrfam": "ipv4", 00:31:47.006 "trsvcid": "$NVMF_PORT", 00:31:47.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.006 "hdgst": ${hdgst:-false}, 00:31:47.006 "ddgst": ${ddgst:-false} 00:31:47.006 }, 00:31:47.006 "method": "bdev_nvme_attach_controller" 00:31:47.006 } 00:31:47.006 EOF 00:31:47.006 )") 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.006 { 00:31:47.006 "params": { 00:31:47.006 "name": "Nvme$subsystem", 00:31:47.006 "trtype": "$TEST_TRANSPORT", 00:31:47.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.006 "adrfam": "ipv4", 00:31:47.006 "trsvcid": "$NVMF_PORT", 00:31:47.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.006 "hdgst": ${hdgst:-false}, 00:31:47.006 "ddgst": ${ddgst:-false} 00:31:47.006 }, 00:31:47.006 "method": "bdev_nvme_attach_controller" 00:31:47.006 } 00:31:47.006 EOF 00:31:47.006 )") 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@72 -- # (( file <= files )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:47.006 { 00:31:47.006 "params": { 00:31:47.006 "name": "Nvme$subsystem", 00:31:47.006 "trtype": "$TEST_TRANSPORT", 00:31:47.006 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:47.006 "adrfam": "ipv4", 00:31:47.006 "trsvcid": "$NVMF_PORT", 00:31:47.006 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:47.006 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:47.006 "hdgst": ${hdgst:-false}, 00:31:47.006 "ddgst": ${ddgst:-false} 00:31:47.006 }, 00:31:47.006 "method": "bdev_nvme_attach_controller" 00:31:47.006 } 00:31:47.006 EOF 00:31:47.006 )") 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:47.006 20:45:39 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:47.006 "params": { 00:31:47.006 "name": "Nvme0", 00:31:47.006 "trtype": "tcp", 00:31:47.006 "traddr": "10.0.0.2", 00:31:47.006 "adrfam": "ipv4", 00:31:47.006 "trsvcid": "4420", 00:31:47.006 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:47.006 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:47.006 "hdgst": false, 00:31:47.006 "ddgst": false 00:31:47.006 }, 00:31:47.006 "method": "bdev_nvme_attach_controller" 00:31:47.006 },{ 00:31:47.006 "params": { 00:31:47.006 "name": "Nvme1", 00:31:47.006 "trtype": "tcp", 00:31:47.007 "traddr": "10.0.0.2", 00:31:47.007 "adrfam": "ipv4", 00:31:47.007 "trsvcid": "4420", 00:31:47.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:47.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:47.007 "hdgst": false, 00:31:47.007 "ddgst": false 00:31:47.007 }, 00:31:47.007 "method": "bdev_nvme_attach_controller" 00:31:47.007 },{ 00:31:47.007 "params": { 00:31:47.007 "name": "Nvme2", 00:31:47.007 "trtype": "tcp", 00:31:47.007 "traddr": "10.0.0.2", 00:31:47.007 "adrfam": "ipv4", 00:31:47.007 "trsvcid": "4420", 00:31:47.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:31:47.007 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:31:47.007 "hdgst": false, 00:31:47.007 "ddgst": false 00:31:47.007 }, 00:31:47.007 "method": "bdev_nvme_attach_controller" 00:31:47.007 }' 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:47.007 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:47.289 20:45:39 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:31:47.289 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:47.289 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:47.289 20:45:39 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:47.551 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.551 ... 00:31:47.551 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.551 ... 00:31:47.551 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:31:47.551 ... 00:31:47.551 fio-3.35 00:31:47.551 Starting 24 threads 00:31:47.551 EAL: No free 2048 kB hugepages reported on node 1 00:31:59.804 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563055: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=708, BW=2834KiB/s (2902kB/s)(27.7MiB/10018msec) 00:31:59.804 slat (nsec): min=5580, max=78618, avg=6425.57, stdev=2707.52 00:31:59.804 clat (usec): min=2134, max=40141, avg=22535.96, stdev=3836.23 00:31:59.804 lat (usec): min=2159, max=40147, avg=22542.38, stdev=3835.33 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[ 9503], 5.00th=[16712], 10.00th=[20055], 20.00th=[21103], 00:31:59.804 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[22414], 00:31:59.804 | 70.00th=[22938], 80.00th=[23462], 90.00th=[27132], 95.00th=[32375], 00:31:59.804 | 99.00th=[33424], 99.50th=[34341], 99.90th=[39060], 99.95th=[40109], 00:31:59.804 | 99.99th=[40109] 00:31:59.804 bw ( KiB/s): min= 2693, max= 3280, per=5.97%, avg=2835.05, stdev=113.00, samples=20 00:31:59.804 iops : min= 673, max= 820, avg=708.65, stdev=28.27, samples=20 00:31:59.804 lat (msec) : 4=0.13%, 10=0.94%, 20=8.93%, 50=90.00% 00:31:59.804 cpu : usr=99.14%, sys=0.54%, ctx=89, majf=0, minf=118 00:31:59.804 IO depths : 1=0.1%, 2=0.1%, 4=6.2%, 8=80.9%, 16=12.8%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=88.8%, 8=6.1%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=7098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563056: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:31:59.804 slat (nsec): min=5632, max=63497, avg=14356.31, stdev=9352.60 00:31:59.804 clat (usec): min=10337, max=63310, avg=32995.84, stdev=2523.25 00:31:59.804 lat (usec): min=10359, max=63327, avg=33010.19, stdev=2523.15 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[35914], 99.90th=[63177], 99.95th=[63177], 00:31:59.804 | 99.99th=[63177] 00:31:59.804 bw ( KiB/s): min= 1795, max= 2048, per=4.04%, avg=1919.32, stdev=42.21, samples=19 00:31:59.804 iops : min= 448, max= 512, avg=479.79, stdev=10.67, samples=19 00:31:59.804 lat (msec) : 20=0.70%, 
50=98.97%, 100=0.33% 00:31:59.804 cpu : usr=99.04%, sys=0.62%, ctx=60, majf=0, minf=45 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563057: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10015msec) 00:31:59.804 slat (nsec): min=5652, max=65046, avg=15882.01, stdev=10677.53 00:31:59.804 clat (usec): min=20070, max=35720, avg=32922.47, stdev=1330.31 00:31:59.804 lat (usec): min=20102, max=35726, avg=32938.35, stdev=1330.28 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[27395], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:59.804 | 99.99th=[35914] 00:31:59.804 bw ( KiB/s): min= 1916, max= 2048, per=4.07%, avg=1932.58, stdev=39.83, samples=19 00:31:59.804 iops : min= 479, max= 512, avg=483.11, stdev= 9.84, samples=19 00:31:59.804 lat (msec) : 50=100.00% 00:31:59.804 cpu : usr=98.77%, sys=0.81%, ctx=90, majf=0, minf=35 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563058: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=486, BW=1945KiB/s (1991kB/s)(19.0MiB/10005msec) 00:31:59.804 slat (nsec): min=5583, max=89301, avg=12939.72, stdev=10384.37 00:31:59.804 clat (usec): min=10278, max=63048, avg=32855.50, stdev=5257.71 00:31:59.804 lat (usec): min=10284, max=63064, avg=32868.44, stdev=5257.71 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[19530], 5.00th=[24511], 10.00th=[26870], 20.00th=[29230], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.804 | 70.00th=[33817], 80.00th=[34341], 90.00th=[38536], 95.00th=[41157], 00:31:59.804 | 99.00th=[49546], 99.50th=[51119], 99.90th=[63177], 99.95th=[63177], 00:31:59.804 | 99.99th=[63177] 00:31:59.804 bw ( KiB/s): min= 1728, max= 2064, per=4.07%, avg=1933.68, stdev=76.64, samples=19 00:31:59.804 iops : min= 432, max= 516, avg=483.42, stdev=19.16, samples=19 00:31:59.804 lat (msec) : 20=1.32%, 50=97.88%, 100=0.80% 00:31:59.804 cpu : usr=98.10%, sys=1.12%, ctx=154, majf=0, minf=56 00:31:59.804 IO depths : 1=0.1%, 2=0.2%, 4=3.4%, 8=80.1%, 16=16.2%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=89.3%, 8=8.7%, 16=2.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563059: Mon Jul 15 20:45:50 2024 00:31:59.804 
read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10019msec) 00:31:59.804 slat (nsec): min=5608, max=60773, avg=16120.06, stdev=9262.66 00:31:59.804 clat (usec): min=20075, max=43740, avg=33015.50, stdev=1135.27 00:31:59.804 lat (usec): min=20085, max=43769, avg=33031.62, stdev=1135.24 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[35390], 99.90th=[43779], 99.95th=[43779], 00:31:59.804 | 99.99th=[43779] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.11, stdev=51.90, samples=19 00:31:59.804 iops : min= 448, max= 512, avg=481.53, stdev=12.98, samples=19 00:31:59.804 lat (msec) : 50=100.00% 00:31:59.804 cpu : usr=99.04%, sys=0.71%, ctx=12, majf=0, minf=51 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563060: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10003msec) 00:31:59.804 slat (nsec): min=5598, max=50321, avg=9727.77, stdev=6033.54 00:31:59.804 clat (usec): min=18202, max=48498, avg=33043.59, stdev=2875.77 00:31:59.804 lat (usec): min=18208, max=48517, avg=33053.31, stdev=2876.10 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[22676], 5.00th=[32113], 10.00th=[32637], 20.00th=[32637], 00:31:59.804 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.804 | 70.00th=[33162], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:31:59.804 | 99.00th=[43779], 99.50th=[44303], 99.90th=[48497], 99.95th=[48497], 00:31:59.804 | 99.99th=[48497] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2032, per=4.07%, avg=1932.37, stdev=53.15, samples=19 00:31:59.804 iops : min= 448, max= 508, avg=483.05, stdev=13.21, samples=19 00:31:59.804 lat (msec) : 20=0.37%, 50=99.63% 00:31:59.804 cpu : usr=98.69%, sys=1.03%, ctx=49, majf=0, minf=59 00:31:59.804 IO depths : 1=4.7%, 2=10.9%, 4=25.0%, 8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563061: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=482, BW=1929KiB/s (1975kB/s)(18.9MiB/10019msec) 00:31:59.804 slat (nsec): min=5666, max=57881, avg=16101.66, stdev=9377.73 00:31:59.804 clat (usec): min=23617, max=50877, avg=33037.90, stdev=1439.25 00:31:59.804 lat (usec): min=23626, max=50894, avg=33054.00, stdev=1438.67 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[31589], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 
99.50th=[34866], 99.90th=[50594], 99.95th=[51119], 00:31:59.804 | 99.99th=[51119] 00:31:59.804 bw ( KiB/s): min= 1795, max= 2048, per=4.05%, avg=1925.75, stdev=50.16, samples=20 00:31:59.804 iops : min= 448, max= 512, avg=481.40, stdev=12.64, samples=20 00:31:59.804 lat (msec) : 50=99.67%, 100=0.33% 00:31:59.804 cpu : usr=98.97%, sys=0.76%, ctx=14, majf=0, minf=47 00:31:59.804 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename0: (groupid=0, jobs=1): err= 0: pid=1563063: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:31:59.804 slat (nsec): min=5551, max=62691, avg=16945.49, stdev=9723.90 00:31:59.804 clat (usec): min=12574, max=56363, avg=32984.84, stdev=2043.30 00:31:59.804 lat (usec): min=12597, max=56385, avg=33001.78, stdev=2043.25 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[31327], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[35914], 99.90th=[56361], 99.95th=[56361], 00:31:59.804 | 99.99th=[56361] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.32, stdev=51.87, samples=19 00:31:59.804 iops : min= 448, max= 512, avg=481.58, stdev=12.97, samples=19 00:31:59.804 lat (msec) : 20=0.56%, 50=99.11%, 100=0.33% 00:31:59.804 cpu : usr=97.23%, sys=1.49%, ctx=136, majf=0, minf=38 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename1: (groupid=0, jobs=1): err= 0: pid=1563064: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=483, BW=1932KiB/s (1979kB/s)(18.9MiB/10003msec) 00:31:59.804 slat (nsec): min=5301, max=70269, avg=16965.18, stdev=11789.99 00:31:59.804 clat (usec): min=20587, max=48511, avg=32979.64, stdev=1487.51 00:31:59.804 lat (usec): min=20594, max=48526, avg=32996.60, stdev=1487.30 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[29230], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[35914], 99.90th=[48497], 99.95th=[48497], 00:31:59.804 | 99.99th=[48497] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2048, per=4.07%, avg=1932.37, stdev=58.42, samples=19 00:31:59.804 iops : min= 448, max= 512, avg=483.05, stdev=14.53, samples=19 00:31:59.804 lat (msec) : 50=100.00% 00:31:59.804 cpu : usr=98.59%, sys=1.01%, ctx=117, majf=0, minf=42 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename1: (groupid=0, jobs=1): err= 0: pid=1563065: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=482, BW=1931KiB/s (1977kB/s)(18.9MiB/10009msec) 00:31:59.804 slat (nsec): min=5565, max=65189, avg=18544.82, stdev=11634.80 00:31:59.804 clat (usec): min=8821, max=54846, avg=32979.39, stdev=2066.75 00:31:59.804 lat (usec): min=8827, max=54863, avg=32997.93, stdev=2066.82 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[30802], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[35390], 99.50th=[35914], 99.90th=[54789], 99.95th=[54789], 00:31:59.804 | 99.99th=[54789] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1925.05, stdev=62.35, samples=19 00:31:59.804 iops : min= 448, max= 512, avg=481.26, stdev=15.59, samples=19 00:31:59.804 lat (msec) : 10=0.04%, 20=0.33%, 50=99.25%, 100=0.37% 00:31:59.804 cpu : usr=99.18%, sys=0.55%, ctx=9, majf=0, minf=39 00:31:59.804 IO depths : 1=3.2%, 2=9.5%, 4=24.9%, 8=53.1%, 16=9.3%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename1: (groupid=0, jobs=1): err= 0: pid=1563066: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:31:59.804 slat (nsec): min=5655, max=68521, avg=19417.77, stdev=11599.95 00:31:59.804 clat (usec): min=23248, max=36368, avg=32951.82, stdev=928.03 00:31:59.804 lat (usec): min=23258, max=36399, avg=32971.24, stdev=928.13 00:31:59.804 clat percentiles (usec): 00:31:59.804 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[34866], 99.50th=[34866], 99.90th=[36439], 99.95th=[36439], 00:31:59.804 | 99.99th=[36439] 00:31:59.804 bw ( KiB/s): min= 1916, max= 2048, per=4.07%, avg=1932.63, stdev=40.69, samples=19 00:31:59.804 iops : min= 479, max= 512, avg=483.16, stdev=10.17, samples=19 00:31:59.804 lat (msec) : 50=100.00% 00:31:59.804 cpu : usr=99.17%, sys=0.55%, ctx=13, majf=0, minf=60 00:31:59.804 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename1: (groupid=0, jobs=1): err= 0: pid=1563067: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10005msec) 00:31:59.804 slat (nsec): min=5563, max=67150, avg=18634.37, stdev=11709.91 00:31:59.804 clat (usec): min=8913, max=63105, avg=33038.57, stdev=2747.78 00:31:59.804 lat (usec): min=8918, max=63125, avg=33057.21, stdev=2747.90 00:31:59.804 clat percentiles 
(usec): 00:31:59.804 | 1.00th=[26084], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.804 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.804 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.804 | 99.00th=[40633], 99.50th=[43779], 99.90th=[63177], 99.95th=[63177], 00:31:59.804 | 99.99th=[63177] 00:31:59.804 bw ( KiB/s): min= 1792, max= 2027, per=4.04%, avg=1919.32, stdev=39.88, samples=19 00:31:59.804 iops : min= 448, max= 506, avg=479.79, stdev= 9.86, samples=19 00:31:59.804 lat (msec) : 10=0.12%, 20=0.41%, 50=98.96%, 100=0.50% 00:31:59.804 cpu : usr=99.26%, sys=0.46%, ctx=9, majf=0, minf=75 00:31:59.804 IO depths : 1=1.2%, 2=7.3%, 4=24.6%, 8=55.5%, 16=11.4%, 32=0.0%, >=64=0.0% 00:31:59.804 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 complete : 0=0.0%, 4=94.3%, 8=0.1%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.804 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.804 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.804 filename1: (groupid=0, jobs=1): err= 0: pid=1563068: Mon Jul 15 20:45:50 2024 00:31:59.804 read: IOPS=484, BW=1936KiB/s (1983kB/s)(19.0MiB/10024msec) 00:31:59.804 slat (nsec): min=5584, max=59369, avg=12529.75, stdev=9332.75 00:31:59.805 clat (usec): min=16739, max=55567, avg=32956.54, stdev=5180.45 00:31:59.805 lat (usec): min=16749, max=55587, avg=32969.07, stdev=5181.05 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[21103], 5.00th=[22676], 10.00th=[26608], 20.00th=[29492], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[43779], 00:31:59.805 | 99.00th=[47449], 99.50th=[50070], 99.90th=[52691], 99.95th=[55313], 00:31:59.805 | 99.99th=[55313] 00:31:59.805 bw ( KiB/s): min= 1792, max= 2192, per=4.08%, avg=1941.26, stdev=107.01, samples=19 00:31:59.805 iops : min= 448, max= 548, avg=485.32, stdev=26.75, samples=19 00:31:59.805 lat (msec) : 20=0.27%, 50=99.24%, 100=0.49% 00:31:59.805 cpu : usr=99.03%, sys=0.69%, ctx=10, majf=0, minf=64 00:31:59.805 IO depths : 1=1.3%, 2=2.6%, 4=9.0%, 8=73.4%, 16=13.7%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=90.4%, 8=6.5%, 16=3.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4852,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename1: (groupid=0, jobs=1): err= 0: pid=1563069: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=487, BW=1951KiB/s (1998kB/s)(19.1MiB/10004msec) 00:31:59.805 slat (nsec): min=5632, max=70624, avg=14108.65, stdev=10356.63 00:31:59.805 clat (usec): min=4489, max=35571, avg=32685.37, stdev=2735.41 00:31:59.805 lat (usec): min=4505, max=35578, avg=32699.48, stdev=2735.38 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[17433], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:59.805 | 99.99th=[35390] 00:31:59.805 bw ( KiB/s): min= 1916, max= 2176, per=4.11%, avg=1953.05, stdev=72.26, samples=19 00:31:59.805 iops : min= 479, max= 544, avg=488.26, stdev=18.06, samples=19 00:31:59.805 lat (msec) : 
10=0.37%, 20=0.66%, 50=98.98% 00:31:59.805 cpu : usr=97.36%, sys=1.45%, ctx=77, majf=0, minf=50 00:31:59.805 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename1: (groupid=0, jobs=1): err= 0: pid=1563070: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:31:59.805 slat (nsec): min=5647, max=61233, avg=14590.16, stdev=9358.72 00:31:59.805 clat (usec): min=10417, max=63140, avg=33004.47, stdev=2582.19 00:31:59.805 lat (usec): min=10431, max=63158, avg=33019.06, stdev=2582.31 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[27132], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[35914], 99.90th=[63177], 99.95th=[63177], 00:31:59.805 | 99.99th=[63177] 00:31:59.805 bw ( KiB/s): min= 1779, max= 2048, per=4.04%, avg=1919.32, stdev=45.08, samples=19 00:31:59.805 iops : min= 444, max= 512, avg=479.79, stdev=11.40, samples=19 00:31:59.805 lat (msec) : 20=0.75%, 50=98.92%, 100=0.33% 00:31:59.805 cpu : usr=98.54%, sys=0.97%, ctx=149, majf=0, minf=56 00:31:59.805 IO depths : 1=5.1%, 2=11.4%, 4=25.0%, 8=51.1%, 16=7.4%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename1: (groupid=0, jobs=1): err= 0: pid=1563071: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=550, BW=2200KiB/s (2253kB/s)(21.5MiB/10006msec) 00:31:59.805 slat (nsec): min=5594, max=82553, avg=7999.85, stdev=4453.09 00:31:59.805 clat (usec): min=4706, max=35637, avg=29016.38, stdev=5553.85 00:31:59.805 lat (usec): min=4724, max=35645, avg=29024.38, stdev=5554.20 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[19006], 5.00th=[20579], 10.00th=[21365], 20.00th=[22676], 00:31:59.805 | 30.00th=[23987], 40.00th=[32375], 50.00th=[32637], 60.00th=[32900], 00:31:59.805 | 70.00th=[32900], 80.00th=[33162], 90.00th=[33162], 95.00th=[33424], 00:31:59.805 | 99.00th=[33817], 99.50th=[34341], 99.90th=[35390], 99.95th=[35390], 00:31:59.805 | 99.99th=[35390] 00:31:59.805 bw ( KiB/s): min= 1920, max= 2432, per=4.59%, avg=2181.68, stdev=196.86, samples=19 00:31:59.805 iops : min= 480, max= 608, avg=545.26, stdev=49.09, samples=19 00:31:59.805 lat (msec) : 10=0.84%, 20=1.42%, 50=97.75% 00:31:59.805 cpu : usr=99.00%, sys=0.72%, ctx=5, majf=0, minf=47 00:31:59.805 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=5504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563072: Mon Jul 15 
20:45:50 2024 00:31:59.805 read: IOPS=489, BW=1957KiB/s (2004kB/s)(19.1MiB/10005msec) 00:31:59.805 slat (nsec): min=5605, max=84222, avg=8739.99, stdev=5797.61 00:31:59.805 clat (usec): min=5800, max=35687, avg=32617.99, stdev=2987.29 00:31:59.805 lat (usec): min=5812, max=35695, avg=32626.73, stdev=2985.92 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[17695], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:59.805 | 99.99th=[35914] 00:31:59.805 bw ( KiB/s): min= 1916, max= 2308, per=4.12%, avg=1959.79, stdev=97.09, samples=19 00:31:59.805 iops : min= 479, max= 577, avg=489.95, stdev=24.27, samples=19 00:31:59.805 lat (msec) : 10=0.65%, 20=0.80%, 50=98.55% 00:31:59.805 cpu : usr=98.88%, sys=0.76%, ctx=111, majf=0, minf=48 00:31:59.805 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563073: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10008msec) 00:31:59.805 slat (nsec): min=5592, max=75646, avg=19335.02, stdev=11761.27 00:31:59.805 clat (usec): min=13019, max=51410, avg=32950.89, stdev=1736.93 00:31:59.805 lat (usec): min=13025, max=51431, avg=32970.23, stdev=1737.31 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32637], 50.00th=[32900], 60.00th=[32900], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[34866], 99.90th=[51119], 99.95th=[51643], 00:31:59.805 | 99.99th=[51643] 00:31:59.805 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.53, stdev=51.83, samples=19 00:31:59.805 iops : min= 448, max= 512, avg=481.63, stdev=12.96, samples=19 00:31:59.805 lat (msec) : 20=0.33%, 50=99.34%, 100=0.33% 00:31:59.805 cpu : usr=98.07%, sys=1.15%, ctx=42, majf=0, minf=42 00:31:59.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563074: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1932KiB/s (1978kB/s)(18.9MiB/10005msec) 00:31:59.805 slat (nsec): min=5595, max=64161, avg=14161.11, stdev=9752.24 00:31:59.805 clat (usec): min=10266, max=62966, avg=32998.58, stdev=2571.42 00:31:59.805 lat (usec): min=10283, max=63014, avg=33012.74, stdev=2571.40 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[27132], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 
95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[35914], 99.90th=[63177], 99.95th=[63177], 00:31:59.805 | 99.99th=[63177] 00:31:59.805 bw ( KiB/s): min= 1776, max= 2048, per=4.04%, avg=1919.37, stdev=45.59, samples=19 00:31:59.805 iops : min= 444, max= 512, avg=479.84, stdev=11.40, samples=19 00:31:59.805 lat (msec) : 20=0.70%, 50=98.88%, 100=0.41% 00:31:59.805 cpu : usr=98.07%, sys=1.16%, ctx=44, majf=0, minf=40 00:31:59.805 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563075: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10015msec) 00:31:59.805 slat (nsec): min=5601, max=70695, avg=15242.97, stdev=12171.32 00:31:59.805 clat (usec): min=23811, max=55260, avg=33038.40, stdev=1242.45 00:31:59.805 lat (usec): min=23819, max=55282, avg=33053.64, stdev=1241.54 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[34866], 99.90th=[46400], 99.95th=[46400], 00:31:59.805 | 99.99th=[55313] 00:31:59.805 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.05, stdev=51.23, samples=19 00:31:59.805 iops : min= 448, max= 512, avg=481.47, stdev=12.71, samples=19 00:31:59.805 lat (msec) : 50=99.96%, 100=0.04% 00:31:59.805 cpu : usr=99.14%, sys=0.55%, ctx=10, majf=0, minf=41 00:31:59.805 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563077: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1931KiB/s (1978kB/s)(18.9MiB/10007msec) 00:31:59.805 slat (nsec): min=5651, max=69621, avg=18620.41, stdev=11725.13 00:31:59.805 clat (usec): min=20232, max=43062, avg=32974.29, stdev=1026.42 00:31:59.805 lat (usec): min=20241, max=43093, avg=32992.91, stdev=1026.28 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[31589], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[35390], 99.90th=[35914], 99.95th=[35914], 00:31:59.805 | 99.99th=[43254] 00:31:59.805 bw ( KiB/s): min= 1916, max= 2048, per=4.07%, avg=1932.58, stdev=39.83, samples=19 00:31:59.805 iops : min= 479, max= 512, avg=483.11, stdev= 9.84, samples=19 00:31:59.805 lat (msec) : 50=100.00% 00:31:59.805 cpu : usr=99.04%, sys=0.68%, ctx=12, majf=0, minf=36 00:31:59.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563078: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=481, BW=1928KiB/s (1974kB/s)(18.8MiB/10006msec) 00:31:59.805 slat (nsec): min=5555, max=62293, avg=12074.11, stdev=9664.16 00:31:59.805 clat (usec): min=10436, max=64365, avg=33096.99, stdev=2410.30 00:31:59.805 lat (usec): min=10442, max=64381, avg=33109.06, stdev=2410.01 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[31327], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32900], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[35390], 99.50th=[36439], 99.90th=[64226], 99.95th=[64226], 00:31:59.805 | 99.99th=[64226] 00:31:59.805 bw ( KiB/s): min= 1792, max= 2048, per=4.04%, avg=1919.16, stdev=43.03, samples=19 00:31:59.805 iops : min= 448, max= 512, avg=479.79, stdev=10.76, samples=19 00:31:59.805 lat (msec) : 20=0.41%, 50=99.21%, 100=0.37% 00:31:59.805 cpu : usr=97.56%, sys=1.33%, ctx=46, majf=0, minf=35 00:31:59.805 IO depths : 1=4.0%, 2=10.2%, 4=24.9%, 8=52.4%, 16=8.5%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4822,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563079: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=482, BW=1930KiB/s (1976kB/s)(18.9MiB/10014msec) 00:31:59.805 slat (nsec): min=5673, max=65021, avg=17810.87, stdev=11191.11 00:31:59.805 clat (usec): min=23686, max=45740, avg=33010.80, stdev=1163.39 00:31:59.805 lat (usec): min=23691, max=45762, avg=33028.61, stdev=1162.44 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[31851], 5.00th=[32375], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[32900], 00:31:59.805 | 70.00th=[33162], 80.00th=[33424], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[34866], 99.90th=[45876], 99.95th=[45876], 00:31:59.805 | 99.99th=[45876] 00:31:59.805 bw ( KiB/s): min= 1792, max= 2048, per=4.05%, avg=1926.05, stdev=51.23, samples=19 00:31:59.805 iops : min= 448, max= 512, avg=481.47, stdev=12.71, samples=19 00:31:59.805 lat (msec) : 50=100.00% 00:31:59.805 cpu : usr=98.41%, sys=0.89%, ctx=30, majf=0, minf=39 00:31:59.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4832,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 filename2: (groupid=0, jobs=1): err= 0: pid=1563080: Mon Jul 15 20:45:50 2024 00:31:59.805 read: IOPS=484, BW=1936KiB/s (1983kB/s)(18.9MiB/10014msec) 00:31:59.805 slat (nsec): min=5602, max=53614, avg=10293.59, stdev=6336.84 00:31:59.805 clat (usec): min=19980, max=35610, avg=32965.18, stdev=1326.58 00:31:59.805 lat (usec): min=19990, max=35616, 
avg=32975.47, stdev=1326.17 00:31:59.805 clat percentiles (usec): 00:31:59.805 | 1.00th=[26870], 5.00th=[32113], 10.00th=[32375], 20.00th=[32637], 00:31:59.805 | 30.00th=[32637], 40.00th=[32900], 50.00th=[32900], 60.00th=[33162], 00:31:59.805 | 70.00th=[33162], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:31:59.805 | 99.00th=[34866], 99.50th=[34866], 99.90th=[35390], 99.95th=[35390], 00:31:59.805 | 99.99th=[35390] 00:31:59.805 bw ( KiB/s): min= 1916, max= 2048, per=4.07%, avg=1932.58, stdev=39.83, samples=19 00:31:59.805 iops : min= 479, max= 512, avg=483.11, stdev= 9.84, samples=19 00:31:59.805 lat (msec) : 20=0.04%, 50=99.96% 00:31:59.805 cpu : usr=98.89%, sys=0.76%, ctx=73, majf=0, minf=56 00:31:59.805 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:31:59.805 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:59.805 issued rwts: total=4848,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:59.805 latency : target=0, window=0, percentile=100.00%, depth=16 00:31:59.805 00:31:59.805 Run status group 0 (all jobs): 00:31:59.805 READ: bw=46.4MiB/s (48.7MB/s), 1928KiB/s-2834KiB/s (1974kB/s-2902kB/s), io=465MiB (488MB), run=10003-10024msec 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:31:59.806 20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 
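
[Note] The destroy_subsystems 0 1 2 call traced here tears the target down with two RPCs per subsystem: nvmf_delete_subsystem removes the NVMe-oF subsystem (detaching any connected host), then bdev_null_delete drops the null bdev that backed its namespace. A minimal standalone sketch of the same sequence, assuming the SPDK app is reachable on the default RPC socket (/var/tmp/spdk.sock) and rpc.py is invoked from the repo root:

    # Teardown mirroring the rpc_cmd calls in the trace.
    for sub in 0 1 2; do
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${sub}"
        scripts/rpc.py bdev_null_delete "bdev_null${sub}"
    done
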
20:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 bdev_null0 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 [2024-07-15 20:45:51.069995] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 bdev_null1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:31:59.806 20:45:51 
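
[Note] Each create_subsystem call above performs the same four-step bring-up visible in the trace: create a DIF-capable null bdev, create the subsystem, attach the bdev as a namespace, and expose a TCP listener. Condensed into standalone commands with the exact arguments from the trace (rpc.py standing in for the harness's rpc_cmd wrapper):

    # Per-subsystem bring-up (sub=0 shown; sub=1 is identical).
    sub=0
    scripts/rpc.py bdev_null_create "bdev_null${sub}" 64 512 --md-size 16 --dif-type 1
    scripts/rpc.py nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode${sub}" \
        --serial-number "53313233-${sub}" --allow-any-host
    scripts/rpc.py nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode${sub}" "bdev_null${sub}"
    scripts/rpc.py nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode${sub}" \
        -t tcp -a 10.0.0.2 -s 4420
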
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:59.806 { 00:31:59.806 "params": { 00:31:59.806 "name": "Nvme$subsystem", 00:31:59.806 "trtype": "$TEST_TRANSPORT", 00:31:59.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.806 "adrfam": "ipv4", 00:31:59.806 "trsvcid": "$NVMF_PORT", 00:31:59.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.806 "hdgst": ${hdgst:-false}, 00:31:59.806 "ddgst": ${ddgst:-false} 00:31:59.806 }, 00:31:59.806 "method": "bdev_nvme_attach_controller" 00:31:59.806 } 00:31:59.806 EOF 00:31:59.806 )") 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:31:59.806 { 00:31:59.806 "params": { 00:31:59.806 "name": "Nvme$subsystem", 00:31:59.806 "trtype": "$TEST_TRANSPORT", 00:31:59.806 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:59.806 "adrfam": "ipv4", 00:31:59.806 "trsvcid": "$NVMF_PORT", 00:31:59.806 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:59.806 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:59.806 "hdgst": ${hdgst:-false}, 00:31:59.806 "ddgst": ${ddgst:-false} 00:31:59.806 }, 00:31:59.806 "method": "bdev_nvme_attach_controller" 00:31:59.806 } 00:31:59.806 EOF 
00:31:59.806 )") 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:31:59.806 "params": { 00:31:59.806 "name": "Nvme0", 00:31:59.806 "trtype": "tcp", 00:31:59.806 "traddr": "10.0.0.2", 00:31:59.806 "adrfam": "ipv4", 00:31:59.806 "trsvcid": "4420", 00:31:59.806 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:59.806 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:59.806 "hdgst": false, 00:31:59.806 "ddgst": false 00:31:59.806 }, 00:31:59.806 "method": "bdev_nvme_attach_controller" 00:31:59.806 },{ 00:31:59.806 "params": { 00:31:59.806 "name": "Nvme1", 00:31:59.806 "trtype": "tcp", 00:31:59.806 "traddr": "10.0.0.2", 00:31:59.806 "adrfam": "ipv4", 00:31:59.806 "trsvcid": "4420", 00:31:59.806 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:59.806 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:59.806 "hdgst": false, 00:31:59.806 "ddgst": false 00:31:59.806 }, 00:31:59.806 "method": "bdev_nvme_attach_controller" 00:31:59.806 }' 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:31:59.806 20:45:51 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:59.806 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:59.806 ... 00:31:59.806 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:31:59.806 ... 
00:31:59.806 fio-3.35 00:31:59.806 Starting 4 threads 00:31:59.806 EAL: No free 2048 kB hugepages reported on node 1 00:32:05.078 00:32:05.078 filename0: (groupid=0, jobs=1): err= 0: pid=1565456: Mon Jul 15 20:45:57 2024 00:32:05.078 read: IOPS=2073, BW=16.2MiB/s (17.0MB/s)(81.0MiB/5004msec) 00:32:05.078 slat (nsec): min=5431, max=33093, avg=6059.06, stdev=1853.51 00:32:05.078 clat (usec): min=1976, max=47771, avg=3841.67, stdev=1378.33 00:32:05.078 lat (usec): min=1998, max=47805, avg=3847.73, stdev=1378.56 00:32:05.078 clat percentiles (usec): 00:32:05.078 | 1.00th=[ 2704], 5.00th=[ 3032], 10.00th=[ 3228], 20.00th=[ 3392], 00:32:05.078 | 30.00th=[ 3490], 40.00th=[ 3589], 50.00th=[ 3654], 60.00th=[ 3752], 00:32:05.078 | 70.00th=[ 3818], 80.00th=[ 4047], 90.00th=[ 4883], 95.00th=[ 5407], 00:32:05.078 | 99.00th=[ 5800], 99.50th=[ 5997], 99.90th=[ 6521], 99.95th=[47449], 00:32:05.078 | 99.99th=[47973] 00:32:05.078 bw ( KiB/s): min=15200, max=17072, per=24.58%, avg=16593.60, stdev=549.38, samples=10 00:32:05.078 iops : min= 1900, max= 2134, avg=2074.20, stdev=68.67, samples=10 00:32:05.078 lat (msec) : 2=0.02%, 4=77.60%, 10=22.31%, 50=0.08% 00:32:05.078 cpu : usr=97.42%, sys=2.36%, ctx=12, majf=0, minf=45 00:32:05.078 IO depths : 1=0.1%, 2=0.4%, 4=71.7%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.078 complete : 0=0.0%, 4=93.0%, 8=7.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.078 issued rwts: total=10374,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.078 filename0: (groupid=0, jobs=1): err= 0: pid=1565457: Mon Jul 15 20:45:57 2024 00:32:05.078 read: IOPS=2076, BW=16.2MiB/s (17.0MB/s)(81.2MiB/5002msec) 00:32:05.078 slat (nsec): min=5421, max=32954, avg=5930.69, stdev=1512.92 00:32:05.078 clat (usec): min=1329, max=45970, avg=3834.86, stdev=1281.81 00:32:05.078 lat (usec): min=1335, max=46003, avg=3840.79, stdev=1282.03 00:32:05.078 clat percentiles (usec): 00:32:05.078 | 1.00th=[ 2769], 5.00th=[ 3163], 10.00th=[ 3326], 20.00th=[ 3458], 00:32:05.078 | 30.00th=[ 3556], 40.00th=[ 3654], 50.00th=[ 3720], 60.00th=[ 3785], 00:32:05.078 | 70.00th=[ 3916], 80.00th=[ 4080], 90.00th=[ 4424], 95.00th=[ 4883], 00:32:05.078 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 6718], 99.95th=[45876], 00:32:05.078 | 99.99th=[45876] 00:32:05.078 bw ( KiB/s): min=15104, max=17248, per=24.57%, avg=16583.11, stdev=603.25, samples=9 00:32:05.078 iops : min= 1888, max= 2156, avg=2072.89, stdev=75.41, samples=9 00:32:05.078 lat (msec) : 2=0.20%, 4=75.51%, 10=24.21%, 50=0.08% 00:32:05.078 cpu : usr=96.90%, sys=2.82%, ctx=14, majf=0, minf=117 00:32:05.078 IO depths : 1=0.3%, 2=1.3%, 4=70.8%, 8=27.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.078 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.078 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.078 issued rwts: total=10389,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.078 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.078 filename1: (groupid=0, jobs=1): err= 0: pid=1565458: Mon Jul 15 20:45:57 2024 00:32:05.078 read: IOPS=2062, BW=16.1MiB/s (16.9MB/s)(80.6MiB/5002msec) 00:32:05.078 slat (nsec): min=5427, max=40642, avg=6178.63, stdev=1936.64 00:32:05.078 clat (usec): min=1442, max=7080, avg=3860.50, stdev=731.14 00:32:05.078 lat (usec): min=1447, max=7086, avg=3866.68, stdev=730.94 00:32:05.078 clat percentiles (usec): 00:32:05.078 | 
1.00th=[ 2474], 5.00th=[ 3097], 10.00th=[ 3228], 20.00th=[ 3425], 00:32:05.078 | 30.00th=[ 3523], 40.00th=[ 3621], 50.00th=[ 3654], 60.00th=[ 3720], 00:32:05.078 | 70.00th=[ 3785], 80.00th=[ 4080], 90.00th=[ 5211], 95.00th=[ 5473], 00:32:05.078 | 99.00th=[ 5932], 99.50th=[ 6063], 99.90th=[ 6456], 99.95th=[ 6521], 00:32:05.078 | 99.99th=[ 7046] 00:32:05.078 bw ( KiB/s): min=15872, max=16881, per=24.43%, avg=16487.22, stdev=302.49, samples=9 00:32:05.078 iops : min= 1984, max= 2110, avg=2060.89, stdev=37.79, samples=9 00:32:05.078 lat (msec) : 2=0.27%, 4=77.30%, 10=22.42% 00:32:05.079 cpu : usr=97.28%, sys=2.50%, ctx=9, majf=0, minf=89 00:32:05.079 IO depths : 1=0.1%, 2=0.3%, 4=72.2%, 8=27.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.079 complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.079 issued rwts: total=10319,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.079 filename1: (groupid=0, jobs=1): err= 0: pid=1565459: Mon Jul 15 20:45:57 2024 00:32:05.079 read: IOPS=2226, BW=17.4MiB/s (18.2MB/s)(87.0MiB/5002msec) 00:32:05.079 slat (nsec): min=5422, max=36667, avg=6042.21, stdev=1911.08 00:32:05.079 clat (usec): min=619, max=6137, avg=3575.97, stdev=522.75 00:32:05.079 lat (usec): min=630, max=6143, avg=3582.01, stdev=522.46 00:32:05.079 clat percentiles (usec): 00:32:05.079 | 1.00th=[ 2040], 5.00th=[ 2769], 10.00th=[ 2966], 20.00th=[ 3228], 00:32:05.079 | 30.00th=[ 3425], 40.00th=[ 3523], 50.00th=[ 3589], 60.00th=[ 3687], 00:32:05.079 | 70.00th=[ 3752], 80.00th=[ 3851], 90.00th=[ 4113], 95.00th=[ 4424], 00:32:05.079 | 99.00th=[ 5145], 99.50th=[ 5407], 99.90th=[ 5866], 99.95th=[ 5932], 00:32:05.079 | 99.99th=[ 6128] 00:32:05.079 bw ( KiB/s): min=17312, max=19305, per=26.42%, avg=17833.89, stdev=627.82, samples=9 00:32:05.079 iops : min= 2164, max= 2413, avg=2229.22, stdev=78.44, samples=9 00:32:05.079 lat (usec) : 750=0.02% 00:32:05.079 lat (msec) : 2=0.92%, 4=84.34%, 10=14.72% 00:32:05.079 cpu : usr=96.88%, sys=2.88%, ctx=8, majf=0, minf=63 00:32:05.079 IO depths : 1=0.1%, 2=1.8%, 4=70.4%, 8=27.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:05.079 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.079 complete : 0=0.0%, 4=92.6%, 8=7.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:05.079 issued rwts: total=11138,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:05.079 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:05.079 00:32:05.079 Run status group 0 (all jobs): 00:32:05.079 READ: bw=65.9MiB/s (69.1MB/s), 16.1MiB/s-17.4MiB/s (16.9MB/s-18.2MB/s), io=330MiB (346MB), run=5002-5004msec 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.079 20:45:57 
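
[Note] A quick consistency check on the summary line above: the group total of io=330MiB over the ~5.0 s runtime gives 330 / 5.004 ≈ 65.9 MiB/s, matching the reported aggregate, and the per-job numbers line up with the 8 KiB random-read block size as well, e.g. 2073 IOPS × 8192 B ≈ 16.98 MB/s (16.2 MiB/s), exactly the filename0 figure reported earlier in the group.
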
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.079 00:32:05.079 real 0m24.437s 00:32:05.079 user 5m17.782s 00:32:05.079 sys 0m4.257s 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:05.079 20:45:57 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:32:05.079 ************************************ 00:32:05.079 END TEST fio_dif_rand_params 00:32:05.079 ************************************ 00:32:05.339 20:45:57 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:05.339 20:45:57 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:05.339 20:45:57 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:05.339 20:45:57 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:05.339 20:45:57 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:05.339 ************************************ 00:32:05.339 START TEST fio_dif_digest 00:32:05.339 ************************************ 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest 
-- target/dif.sh@128 -- # ddgst=true 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.339 bdev_null0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:05.339 [2024-07-15 20:45:57.576998] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:32:05.339 { 00:32:05.339 
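
[Note] The fio_dif_digest test differs from the rand_params runs in two ways visible in this trace: the namespace sits on a DIF type 3 null bdev, and both header and data digests are requested on the NVMe/TCP connection ("hdgst": true, "ddgst": true in the generated config that follows). The bdev half of that, as a standalone command with the traced arguments:

    # 64 MiB null bdev, 512-byte blocks + 16-byte metadata, protection type 3.
    scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
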
"params": { 00:32:05.339 "name": "Nvme$subsystem", 00:32:05.339 "trtype": "$TEST_TRANSPORT", 00:32:05.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:05.339 "adrfam": "ipv4", 00:32:05.339 "trsvcid": "$NVMF_PORT", 00:32:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:05.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:05.339 "hdgst": ${hdgst:-false}, 00:32:05.339 "ddgst": ${ddgst:-false} 00:32:05.339 }, 00:32:05.339 "method": "bdev_nvme_attach_controller" 00:32:05.339 } 00:32:05.339 EOF 00:32:05.339 )") 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:32:05.339 "params": { 00:32:05.339 "name": "Nvme0", 00:32:05.339 "trtype": "tcp", 00:32:05.339 "traddr": "10.0.0.2", 00:32:05.339 "adrfam": "ipv4", 00:32:05.339 "trsvcid": "4420", 00:32:05.339 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:05.339 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:05.339 "hdgst": true, 00:32:05.339 "ddgst": true 00:32:05.339 }, 00:32:05.339 "method": "bdev_nvme_attach_controller" 00:32:05.339 }' 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:32:05.339 20:45:57 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:05.904 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:05.904 ... 
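
[Note] fio echoes the job parameters it parsed just above: 128 KiB random reads, iodepth 3, three clones of a single job, via the spdk_bdev engine. A representative job file consistent with those values, assuming gen_fio_conf emits something along these lines (the file name, bdev name, and exact option set are illustrative; the real file is generated on the fly and passed over /dev/fd/61):

    cat > digest.fio <<'EOF'   # illustrative reconstruction, not the generated file
    [global]
    ioengine=spdk_bdev
    rw=randread
    bs=128k
    iodepth=3
    runtime=10
    time_based=1

    [filename0]
    filename=Nvme0n1   # bdev name assumed: controller Nvme0, namespace 1
    numjobs=3
    EOF
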
00:32:05.904 fio-3.35 00:32:05.904 Starting 3 threads 00:32:05.904 EAL: No free 2048 kB hugepages reported on node 1 00:32:18.177 00:32:18.177 filename0: (groupid=0, jobs=1): err= 0: pid=1566822: Mon Jul 15 20:46:08 2024 00:32:18.177 read: IOPS=245, BW=30.6MiB/s (32.1MB/s)(308MiB/10050msec) 00:32:18.177 slat (nsec): min=5808, max=31773, avg=7479.13, stdev=1649.63 00:32:18.177 clat (usec): min=6797, max=53327, avg=12208.88, stdev=1510.02 00:32:18.177 lat (usec): min=6804, max=53333, avg=12216.36, stdev=1510.07 00:32:18.177 clat percentiles (usec): 00:32:18.177 | 1.00th=[ 8717], 5.00th=[10552], 10.00th=[10945], 20.00th=[11469], 00:32:18.177 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12256], 60.00th=[12387], 00:32:18.177 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:32:18.177 | 99.00th=[14353], 99.50th=[14615], 99.90th=[15270], 99.95th=[49021], 00:32:18.177 | 99.99th=[53216] 00:32:18.177 bw ( KiB/s): min=30208, max=33536, per=37.61%, avg=31513.60, stdev=909.47, samples=20 00:32:18.177 iops : min= 236, max= 262, avg=246.20, stdev= 7.11, samples=20 00:32:18.177 lat (msec) : 10=3.00%, 20=96.92%, 50=0.04%, 100=0.04% 00:32:18.177 cpu : usr=96.46%, sys=3.29%, ctx=23, majf=0, minf=110 00:32:18.177 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 issued rwts: total=2464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.177 filename0: (groupid=0, jobs=1): err= 0: pid=1566823: Mon Jul 15 20:46:08 2024 00:32:18.177 read: IOPS=221, BW=27.7MiB/s (29.0MB/s)(278MiB/10046msec) 00:32:18.177 slat (nsec): min=6221, max=32486, avg=9781.36, stdev=2140.08 00:32:18.177 clat (usec): min=8704, max=55620, avg=13507.34, stdev=2213.00 00:32:18.177 lat (usec): min=8714, max=55629, avg=13517.13, stdev=2212.85 00:32:18.177 clat percentiles (usec): 00:32:18.177 | 1.00th=[10028], 5.00th=[11600], 10.00th=[11994], 20.00th=[12649], 00:32:18.177 | 30.00th=[12911], 40.00th=[13173], 50.00th=[13435], 60.00th=[13698], 00:32:18.177 | 70.00th=[13960], 80.00th=[14353], 90.00th=[14746], 95.00th=[15139], 00:32:18.177 | 99.00th=[16057], 99.50th=[16581], 99.90th=[54789], 99.95th=[55313], 00:32:18.177 | 99.99th=[55837] 00:32:18.177 bw ( KiB/s): min=26880, max=30208, per=33.97%, avg=28467.20, stdev=833.06, samples=20 00:32:18.177 iops : min= 210, max= 236, avg=222.40, stdev= 6.51, samples=20 00:32:18.177 lat (msec) : 10=1.08%, 20=98.70%, 50=0.04%, 100=0.18% 00:32:18.177 cpu : usr=91.92%, sys=6.02%, ctx=1497, majf=0, minf=73 00:32:18.177 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 issued rwts: total=2226,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.177 filename0: (groupid=0, jobs=1): err= 0: pid=1566824: Mon Jul 15 20:46:08 2024 00:32:18.177 read: IOPS=187, BW=23.5MiB/s (24.6MB/s)(236MiB/10048msec) 00:32:18.177 slat (nsec): min=5690, max=40836, avg=6541.86, stdev=1141.71 00:32:18.177 clat (usec): min=10119, max=58384, avg=15925.08, stdev=4031.32 00:32:18.177 lat (usec): min=10125, max=58390, avg=15931.62, stdev=4031.32 00:32:18.177 clat percentiles (usec): 
00:32:18.177 | 1.00th=[12649], 5.00th=[13698], 10.00th=[14091], 20.00th=[14615], 00:32:18.177 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:32:18.177 | 70.00th=[16188], 80.00th=[16712], 90.00th=[17171], 95.00th=[17695], 00:32:18.177 | 99.00th=[19530], 99.50th=[55837], 99.90th=[58459], 99.95th=[58459], 00:32:18.177 | 99.99th=[58459] 00:32:18.177 bw ( KiB/s): min=22272, max=25600, per=28.82%, avg=24153.60, stdev=1024.93, samples=20 00:32:18.177 iops : min= 174, max= 200, avg=188.70, stdev= 8.01, samples=20 00:32:18.177 lat (msec) : 20=99.10%, 50=0.05%, 100=0.85% 00:32:18.177 cpu : usr=95.95%, sys=3.82%, ctx=24, majf=0, minf=203 00:32:18.177 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:18.177 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:18.177 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:18.177 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:18.177 00:32:18.177 Run status group 0 (all jobs): 00:32:18.177 READ: bw=81.8MiB/s (85.8MB/s), 23.5MiB/s-30.6MiB/s (24.6MB/s-32.1MB/s), io=822MiB (862MB), run=10046-10050msec 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:18.177 00:32:18.177 real 0m11.132s 00:32:18.177 user 0m41.286s 00:32:18.177 sys 0m1.631s 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:18.177 20:46:08 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:32:18.177 ************************************ 00:32:18.177 END TEST fio_dif_digest 00:32:18.177 ************************************ 00:32:18.177 20:46:08 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:32:18.177 20:46:08 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:18.177 20:46:08 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:32:18.177 rmmod nvme_tcp 00:32:18.177 rmmod nvme_fabrics 00:32:18.177 rmmod nvme_keyring 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:32:18.177 20:46:08 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1556293 ']' 00:32:18.178 20:46:08 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1556293 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1556293 ']' 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1556293 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1556293 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1556293' 00:32:18.178 killing process with pid 1556293 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1556293 00:32:18.178 20:46:08 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1556293 00:32:18.178 20:46:08 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:32:18.178 20:46:08 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:20.717 Waiting for block devices as requested 00:32:20.717 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:20.717 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:20.717 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:20.717 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:20.717 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:20.977 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:20.977 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:20.977 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:20.977 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:21.236 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:21.236 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:21.495 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:21.495 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:21.495 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:21.495 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:21.754 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:21.754 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:21.754 20:46:13 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:21.754 20:46:13 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:21.754 20:46:13 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:21.754 20:46:13 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:21.754 20:46:13 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:21.754 20:46:13 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:21.754 20:46:13 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.290 20:46:16 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:24.290 00:32:24.290 real 1m19.085s 00:32:24.290 user 8m1.916s 00:32:24.290 sys 0m21.506s 00:32:24.290 20:46:16 nvmf_dif -- common/autotest_common.sh@1124 
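
[Note] Suite teardown, as traced above, unloads the initiator-side kernel modules and then stops the nvmf target app. The equivalent commands in isolation (the PID is the one from this run; the wait only succeeds because the harness started the target as a child of the same shell, which is what killprocess relies on):

    modprobe -v -r nvme-tcp            # also removes nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    pid=1556293                        # nvmf target PID from this run
    kill -0 "$pid" 2>/dev/null && kill "$pid"
    wait "$pid" 2>/dev/null || true    # reactor shutdown completes before cleanup continues
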
-- # xtrace_disable 00:32:24.290 20:46:16 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:32:24.290 ************************************ 00:32:24.290 END TEST nvmf_dif 00:32:24.290 ************************************ 00:32:24.290 20:46:16 -- common/autotest_common.sh@1142 -- # return 0 00:32:24.290 20:46:16 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:24.290 20:46:16 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:24.290 20:46:16 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:24.290 20:46:16 -- common/autotest_common.sh@10 -- # set +x 00:32:24.290 ************************************ 00:32:24.290 START TEST nvmf_abort_qd_sizes 00:32:24.290 ************************************ 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:24.290 * Looking for test storage... 00:32:24.290 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
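
[Note] The common.sh prologue traced here mints a fresh host identity per run: nvme gen-hostnqn emits an NQN of the form nqn.2014-08.org.nvmexpress:uuid:<UUID>, and the host ID used alongside --hostnqn is the UUID suffix. A sketch of that derivation; the suffix strip is an assumption, but it is consistent with the two values in the trace:

    NVME_HOSTNQN=$(nvme gen-hostnqn)    # nqn.2014-08.org.nvmexpress:uuid:<UUID>
    NVME_HOSTID=${NVME_HOSTNQN##*:}     # e.g. 00539ede-7deb-ec11-9bc7-a4bf01928396
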
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:24.290 20:46:16 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:32:24.290 20:46:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:31:00.0 (0x8086 - 0x159b)' 00:32:32.418 Found 0000:31:00.0 (0x8086 - 0x159b) 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.418 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:31:00.1 (0x8086 - 0x159b)' 00:32:32.419 Found 0000:31:00.1 (0x8086 - 0x159b) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.0: cvl_0_0' 00:32:32.419 Found net devices under 0000:31:00.0: cvl_0_0 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:31:00.1: cvl_0_1' 00:32:32.419 Found net devices under 0000:31:00.1: cvl_0_1 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
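The records above trace gather_supported_nvmf_pci_devs: both Intel E810 ports (vendor 0x8086, device 0x159b) are matched out of pci_bus_cache and each PCI function is then resolved to its kernel net interface through /sys/bus/pci/devices/<bdf>/net. A minimal standalone sketch of that lookup follows; the vendor/device IDs and the "Found net devices" message are taken from this log, while the loop itself is an illustrative reconstruction, not the harness's exact code path:

# Sketch: list E810 ports and the net devices bound to them via sysfs.
# IDs (0x8086/0x159b) come from the trace above; adjust for other NICs.
for pci in /sys/bus/pci/devices/*; do
    [[ $(<"$pci/vendor") == 0x8086 && $(<"$pci/device") == 0x159b ]] || continue
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
    done
done

The two interfaces found here, cvl_0_0 and cvl_0_1, are what nvmf_tcp_init splits in the records that follow: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace as the target side (10.0.0.2) while cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and the two pings verify the point-to-point path in both directions before any NVMe/TCP traffic is started.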
00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:32.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:32.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.606 ms 00:32:32.419 00:32:32.419 --- 10.0.0.2 ping statistics --- 00:32:32.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.419 rtt min/avg/max/mdev = 0.606/0.606/0.606/0.000 ms 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:32.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:32.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:32:32.419 00:32:32.419 --- 10.0.0.1 ping statistics --- 00:32:32.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.419 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:32:32.419 20:46:24 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:35.717 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:35.717 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:35.717 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:35.717 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:35.717 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.3 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:32:35.978 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:35.978 20:46:28 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1576999 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1576999 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1576999 ']' 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:36.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:36.238 20:46:28 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:36.238 [2024-07-15 20:46:28.389014] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:32:36.238 [2024-07-15 20:46:28.389048] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:36.238 EAL: No free 2048 kB hugepages reported on node 1 00:32:36.238 [2024-07-15 20:46:28.452957] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:36.238 [2024-07-15 20:46:28.520164] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:36.238 [2024-07-15 20:46:28.520195] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:36.238 [2024-07-15 20:46:28.520203] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:36.238 [2024-07-15 20:46:28.520210] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:36.238 [2024-07-15 20:46:28.520215] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:36.238 [2024-07-15 20:46:28.520338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:36.238 [2024-07-15 20:46:28.520575] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:36.238 [2024-07-15 20:46:28.520708] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:32:36.238 [2024-07-15 20:46:28.520712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:36.806 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:36.806 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:32:36.806 20:46:29 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:36.806 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:36.806 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:65:00.0 ]] 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:65:00.0 ]] 00:32:37.066 20:46:29 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:65:00.0 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:65:00.0 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:37.066 20:46:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:37.066 ************************************ 00:32:37.066 START TEST spdk_target_abort 00:32:37.066 ************************************ 00:32:37.066 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:32:37.066 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:37.066 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:65:00.0 -b spdk_target 00:32:37.066 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.066 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.325 spdk_targetn1 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.325 [2024-07-15 20:46:29.581330] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:37.325 [2024-07-15 20:46:29.621565] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:37.325 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:37.326 20:46:29 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:37.326 EAL: No free 2048 kB hugepages 
reported on node 1 00:32:37.638 [2024-07-15 20:46:29.736666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:191 nsid:1 lba:312 len:8 PRP1 0x2000078be000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.736693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:191 cdw0:0 sqhd:0028 p:1 m:0 dnr:0 00:32:37.638 [2024-07-15 20:46:29.738653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:440 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.738666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:003a p:1 m:0 dnr:0 00:32:37.638 [2024-07-15 20:46:29.752000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:832 len:8 PRP1 0x2000078c0000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.752015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:006a p:1 m:0 dnr:0 00:32:37.638 [2024-07-15 20:46:29.752960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:188 nsid:1 lba:888 len:8 PRP1 0x2000078c2000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.752972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:188 cdw0:0 sqhd:0070 p:1 m:0 dnr:0 00:32:37.638 [2024-07-15 20:46:29.790951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2248 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.790967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:32:37.638 [2024-07-15 20:46:29.795586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:190 nsid:1 lba:2344 len:8 PRP1 0x2000078c4000 PRP2 0x0 00:32:37.638 [2024-07-15 20:46:29.795599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:190 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:40.955 Initializing NVMe Controllers 00:32:40.956 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:40.956 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:40.956 Initialization complete. Launching workers. 
00:32:40.956 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11793, failed: 6 00:32:40.956 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 3259, failed to submit 8540 00:32:40.956 success 735, unsuccess 2524, failed 0 00:32:40.956 20:46:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:40.956 20:46:32 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:40.956 EAL: No free 2048 kB hugepages reported on node 1 00:32:40.956 [2024-07-15 20:46:32.933393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:172 nsid:1 lba:216 len:8 PRP1 0x200007c50000 PRP2 0x0 00:32:40.956 [2024-07-15 20:46:32.933432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:172 cdw0:0 sqhd:002c p:1 m:0 dnr:0 00:32:40.956 [2024-07-15 20:46:33.029402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:183 nsid:1 lba:2544 len:8 PRP1 0x200007c5a000 PRP2 0x0 00:32:40.956 [2024-07-15 20:46:33.029427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:183 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:40.956 [2024-07-15 20:46:33.045353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:2904 len:8 PRP1 0x200007c58000 PRP2 0x0 00:32:40.956 [2024-07-15 20:46:33.045375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:32:40.956 [2024-07-15 20:46:33.077375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:4 cid:187 nsid:1 lba:3688 len:8 PRP1 0x200007c52000 PRP2 0x0 00:32:40.956 [2024-07-15 20:46:33.077397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:4 cid:187 cdw0:0 sqhd:00d1 p:0 m:0 dnr:0 00:32:44.255 Initializing NVMe Controllers 00:32:44.255 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:44.255 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:44.255 Initialization complete. Launching workers. 
00:32:44.255 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8821, failed: 4 00:32:44.255 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1208, failed to submit 7617 00:32:44.255 success 376, unsuccess 832, failed 0 00:32:44.255 20:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:44.255 20:46:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:44.255 EAL: No free 2048 kB hugepages reported on node 1 00:32:45.639 [2024-07-15 20:46:37.581305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:2 cid:160 nsid:1 lba:157328 len:8 PRP1 0x2000078dc000 PRP2 0x0 00:32:45.639 [2024-07-15 20:46:37.581337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:2 cid:160 cdw0:0 sqhd:00fa p:1 m:0 dnr:0 00:32:47.023 Initializing NVMe Controllers 00:32:47.023 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:47.023 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:47.023 Initialization complete. Launching workers. 00:32:47.023 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 42327, failed: 1 00:32:47.023 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2621, failed to submit 39707 00:32:47.023 success 586, unsuccess 2035, failed 0 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.023 20:46:39 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1576999 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1576999 ']' 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1576999 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1576999 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' 
reactor_0 = sudo ']' 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1576999' 00:32:48.936 killing process with pid 1576999 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1576999 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1576999 00:32:48.936 00:32:48.936 real 0m12.015s 00:32:48.936 user 0m48.960s 00:32:48.936 sys 0m1.767s 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:48.936 20:46:41 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:32:48.936 ************************************ 00:32:48.936 END TEST spdk_target_abort 00:32:48.936 ************************************ 00:32:49.197 20:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:32:49.197 20:46:41 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:49.197 20:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:32:49.197 20:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:49.197 20:46:41 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:32:49.197 ************************************ 00:32:49.197 START TEST kernel_target_abort 00:32:49.197 ************************************ 00:32:49.197 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:32:49.197 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:49.197 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:49.198 20:46:41 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:53.400 Waiting for block devices as requested 00:32:53.400 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:53.400 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:32:53.400 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:32:53.661 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:32:53.661 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:32:53.661 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:32:53.661 0000:00:01.2 (8086 0b00): vfio-pci -> ioatdma 00:32:53.920 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:32:53.920 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:32:53.920 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:53.920 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:53.921 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:53.921 No valid GPT data, bailing 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 --hostid=00539ede-7deb-ec11-9bc7-a4bf01928396 -a 10.0.0.1 -t tcp -s 4420 00:32:54.181 00:32:54.181 Discovery Log Number of Records 2, Generation counter 2 00:32:54.181 =====Discovery Log Entry 0====== 00:32:54.181 trtype: tcp 00:32:54.181 adrfam: ipv4 00:32:54.181 subtype: current discovery subsystem 00:32:54.181 treq: not specified, sq flow control disable supported 00:32:54.181 portid: 1 00:32:54.181 trsvcid: 4420 00:32:54.181 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:54.181 traddr: 10.0.0.1 00:32:54.181 eflags: none 00:32:54.181 sectype: none 00:32:54.181 =====Discovery Log Entry 1====== 00:32:54.181 trtype: tcp 00:32:54.181 adrfam: ipv4 00:32:54.181 subtype: nvme subsystem 00:32:54.181 treq: not specified, sq flow control disable supported 00:32:54.181 portid: 1 00:32:54.181 trsvcid: 4420 00:32:54.181 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:54.181 traddr: 10.0.0.1 00:32:54.181 eflags: none 00:32:54.181 sectype: none 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:54.181 
20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:54.181 20:46:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:54.181 EAL: No free 2048 kB hugepages reported on node 1 00:32:57.469 Initializing NVMe Controllers 00:32:57.469 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:57.469 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:57.469 Initialization complete. Launching workers. 00:32:57.469 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 58411, failed: 0 00:32:57.469 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 58411, failed to submit 0 00:32:57.469 success 0, unsuccess 58411, failed 0 00:32:57.469 20:46:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:57.469 20:46:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:57.469 EAL: No free 2048 kB hugepages reported on node 1 00:33:00.759 Initializing NVMe Controllers 00:33:00.759 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:00.759 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:00.759 Initialization complete. Launching workers. 
00:33:00.759 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 100199, failed: 0 00:33:00.759 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 25254, failed to submit 74945 00:33:00.759 success 0, unsuccess 25254, failed 0 00:33:00.759 20:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:00.759 20:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:00.759 EAL: No free 2048 kB hugepages reported on node 1 00:33:03.298 Initializing NVMe Controllers 00:33:03.298 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:03.298 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:03.298 Initialization complete. Launching workers. 00:33:03.298 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 96563, failed: 0 00:33:03.298 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 24118, failed to submit 72445 00:33:03.298 success 0, unsuccess 24118, failed 0 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:03.298 20:46:55 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:07.496 0000:80:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.3 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:80:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.6 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.7 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.4 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.5 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.2 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.3 (8086 0b00): ioatdma -> 
vfio-pci 00:33:07.496 0000:00:01.0 (8086 0b00): ioatdma -> vfio-pci 00:33:07.496 0000:00:01.1 (8086 0b00): ioatdma -> vfio-pci 00:33:09.411 0000:65:00.0 (144d a80a): nvme -> vfio-pci 00:33:09.411 00:33:09.411 real 0m20.021s 00:33:09.411 user 0m9.191s 00:33:09.411 sys 0m6.221s 00:33:09.411 20:47:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:09.411 20:47:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:33:09.411 ************************************ 00:33:09.411 END TEST kernel_target_abort 00:33:09.411 ************************************ 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:09.411 rmmod nvme_tcp 00:33:09.411 rmmod nvme_fabrics 00:33:09.411 rmmod nvme_keyring 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1576999 ']' 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1576999 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1576999 ']' 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1576999 00:33:09.411 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1576999) - No such process 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1576999 is not found' 00:33:09.411 Process with pid 1576999 is not found 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:33:09.411 20:47:01 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:33:13.602 Waiting for block devices as requested 00:33:13.602 0000:80:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.2 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:80:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:13.602 0000:65:00.0 (144d a80a): vfio-pci -> nvme 00:33:13.862 0000:00:01.6 (8086 0b00): vfio-pci -> ioatdma 00:33:13.862 0000:00:01.7 (8086 0b00): vfio-pci -> ioatdma 00:33:13.862 0000:00:01.4 (8086 0b00): vfio-pci -> ioatdma 00:33:13.862 0000:00:01.5 (8086 0b00): vfio-pci -> ioatdma 00:33:14.121 0000:00:01.2 (8086 0b00): vfio-pci -> 
ioatdma 00:33:14.121 0000:00:01.3 (8086 0b00): vfio-pci -> ioatdma 00:33:14.121 0000:00:01.0 (8086 0b00): vfio-pci -> ioatdma 00:33:14.121 0000:00:01.1 (8086 0b00): vfio-pci -> ioatdma 00:33:14.121 20:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:14.121 20:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:14.121 20:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:14.121 20:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:14.121 20:47:06 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:14.381 20:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:14.382 20:47:06 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:16.287 20:47:08 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:16.287 00:33:16.287 real 0m52.430s 00:33:16.287 user 1m3.750s 00:33:16.287 sys 0m19.409s 00:33:16.287 20:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:16.287 20:47:08 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:33:16.287 ************************************ 00:33:16.287 END TEST nvmf_abort_qd_sizes 00:33:16.287 ************************************ 00:33:16.287 20:47:08 -- common/autotest_common.sh@1142 -- # return 0 00:33:16.287 20:47:08 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:16.287 20:47:08 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:16.287 20:47:08 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:16.287 20:47:08 -- common/autotest_common.sh@10 -- # set +x 00:33:16.287 ************************************ 00:33:16.287 START TEST keyring_file 00:33:16.287 ************************************ 00:33:16.287 20:47:08 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:33:16.547 * Looking for test storage... 
00:33:16.547 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:16.547 20:47:08 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:16.547 20:47:08 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:16.547 20:47:08 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:16.547 20:47:08 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.547 20:47:08 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.547 20:47:08 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.547 20:47:08 keyring_file -- paths/export.sh@5 -- # export PATH 00:33:16.547 20:47:08 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@47 -- # : 0 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.jNPlxL7rCP 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:16.547 20:47:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:16.547 20:47:08 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.jNPlxL7rCP 00:33:16.547 20:47:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.jNPlxL7rCP 00:33:16.547 20:47:08 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.jNPlxL7rCP 00:33:16.548 20:47:08 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@17 -- # name=key1 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.qRtnUkkZuS 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:16.548 20:47:08 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.qRtnUkkZuS 00:33:16.548 20:47:08 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.qRtnUkkZuS 00:33:16.548 20:47:08 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.qRtnUkkZuS 00:33:16.548 20:47:08 keyring_file -- keyring/file.sh@30 -- # tgtpid=1587675 00:33:16.548 20:47:08 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1587675 00:33:16.548 20:47:08 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1587675 ']' 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:16.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:16.548 20:47:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:16.807 [2024-07-15 20:47:08.966156] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
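By this point prep_key has run twice: each call mktemp's a file, renders the raw hex key into the NVMe TLS PSK interchange form via the inline python, and locks the file down to 0600 (a later step in this test shows the keyring rejecting anything looser). A sketch of that flow, assuming the interchange layout NVMeTLSkey-1:<two-digit hash id>:<base64 of key bytes plus CRC-32>: rendered by the inline python; the helper below is illustrative, not SPDK's verbatim implementation:

  prep_key_sketch() {
    local key=$1 digest=$2 path
    path=$(mktemp)    # e.g. /tmp/tmp.jNPlxL7rCP
    python3 -c 'import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])
crc = zlib.crc32(key).to_bytes(4, "little")  # assumed: little-endian CRC-32 tail
b64 = base64.b64encode(key + crc).decode()
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), b64), end="")' "$key" "$digest" > "$path"
    chmod 0600 "$path"   # the keyring rejects group/other-accessible key files
    echo "$path"
  }

Called as prep_key_sketch 00112233445566778899aabbccddeeff 0, it would emit a temp path whose base64 payload decodes back to the original 16-byte key.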
00:33:16.807 [2024-07-15 20:47:08.966257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587675 ] 00:33:16.807 EAL: No free 2048 kB hugepages reported on node 1 00:33:16.807 [2024-07-15 20:47:09.037433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:16.807 [2024-07-15 20:47:09.112902] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:17.376 20:47:09 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 [2024-07-15 20:47:09.698157] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:17.376 null0 00:33:17.376 [2024-07-15 20:47:09.730202] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:17.376 [2024-07-15 20:47:09.730449] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:17.376 [2024-07-15 20:47:09.738216] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.376 20:47:09 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.376 20:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.376 [2024-07-15 20:47:09.750248] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:33:17.376 request: 00:33:17.376 { 00:33:17.376 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:17.376 "secure_channel": false, 00:33:17.376 "listen_address": { 00:33:17.376 "trtype": "tcp", 00:33:17.376 "traddr": "127.0.0.1", 00:33:17.376 "trsvcid": "4420" 00:33:17.376 }, 00:33:17.635 "method": "nvmf_subsystem_add_listener", 00:33:17.635 "req_id": 1 00:33:17.635 } 00:33:17.635 Got JSON-RPC error response 00:33:17.635 response: 00:33:17.635 { 00:33:17.635 "code": -32602, 00:33:17.635 "message": "Invalid parameters" 00:33:17.635 } 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@651 -- # es=1 
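The NOT/valid_exec_arg sequence above is autotest's negative-test idiom: adding a listener that already exists is supposed to fail (hence the -32602 Invalid parameters response), and the wrapper inverts the exit status so that an expected failure counts as a pass. Roughly, as a simplified sketch of the helper in common/autotest_common.sh:

  NOT() {
    local es=0
    "$@" || es=$?
    # an exit status above 128 means the command died on a signal;
    # that still fails the test rather than counting as an expected failure
    if (( es > 128 )); then
      return "$es"
    fi
    (( es != 0 ))   # succeed only if the wrapped command failed
  }

The es=1 assignment just above and the (( es > 128 )) check that follows are exactly this bookkeeping.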
00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:17.635 20:47:09 keyring_file -- keyring/file.sh@46 -- # bperfpid=1587888 00:33:17.635 20:47:09 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1587888 /var/tmp/bperf.sock 00:33:17.635 20:47:09 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1587888 ']' 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:17.635 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:17.635 20:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:17.635 [2024-07-15 20:47:09.812154] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 00:33:17.635 [2024-07-15 20:47:09.812203] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1587888 ] 00:33:17.635 EAL: No free 2048 kB hugepages reported on node 1 00:33:17.635 [2024-07-15 20:47:09.895120] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:17.635 [2024-07-15 20:47:09.959146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:18.203 20:47:10 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:18.203 20:47:10 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:18.203 20:47:10 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:18.203 20:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:18.462 20:47:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qRtnUkkZuS 00:33:18.462 20:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qRtnUkkZuS 00:33:18.721 20:47:10 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:33:18.721 20:47:10 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:33:18.721 20:47:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.721 20:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.721 20:47:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.721 20:47:11 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.jNPlxL7rCP == \/\t\m\p\/\t\m\p\.\j\N\P\l\x\L\7\r\C\P ]] 00:33:18.721 20:47:11 keyring_file -- 
keyring/file.sh@52 -- # get_key key1 00:33:18.721 20:47:11 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:33:18.721 20:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.721 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.721 20:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:18.980 20:47:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.qRtnUkkZuS == \/\t\m\p\/\t\m\p\.\q\R\t\n\U\k\k\Z\u\S ]] 00:33:18.980 20:47:11 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.980 20:47:11 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:18.980 20:47:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.980 20:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.239 20:47:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:19.239 20:47:11 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:19.239 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:19.519 [2024-07-15 20:47:11.644712] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:19.519 nvme0n1 00:33:19.519 20:47:11 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:33:19.519 20:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:19.519 20:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.519 20:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.519 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.519 20:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:19.840 20:47:11 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:19.840 20:47:11 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:33:19.840 20:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:19.840 20:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.840 20:47:11 keyring_file -- 
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.840 20:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.840 20:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:19.840 20:47:12 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:19.840 20:47:12 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:19.840 Running I/O for 1 seconds... 00:33:20.811 00:33:20.811 Latency(us) 00:33:20.811 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:20.811 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:20.811 nvme0n1 : 1.01 11408.54 44.56 0.00 0.00 11182.80 5024.43 17913.17 00:33:20.811 =================================================================================================================== 00:33:20.811 Total : 11408.54 44.56 0.00 0.00 11182.80 5024.43 17913.17 00:33:20.811 0 00:33:20.811 20:47:13 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:20.811 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:21.071 20:47:13 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:33:21.071 20:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:21.071 20:47:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.071 20:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.071 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.071 20:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.330 20:47:13 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:21.330 20:47:13 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:21.330 20:47:13 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:21.330 20:47:13 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.330 20:47:13 keyring_file -- 
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:21.330 20:47:13 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:21.330 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:21.590 [2024-07-15 20:47:13.767364] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:21.590 [2024-07-15 20:47:13.768066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025020 (107): Transport endpoint is not connected 00:33:21.590 [2024-07-15 20:47:13.769062] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2025020 (9): Bad file descriptor 00:33:21.590 [2024-07-15 20:47:13.770068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:21.590 [2024-07-15 20:47:13.770074] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:21.590 [2024-07-15 20:47:13.770080] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:33:21.590 request: 00:33:21.590 { 00:33:21.590 "name": "nvme0", 00:33:21.590 "trtype": "tcp", 00:33:21.590 "traddr": "127.0.0.1", 00:33:21.590 "adrfam": "ipv4", 00:33:21.590 "trsvcid": "4420", 00:33:21.590 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:21.590 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:21.590 "prchk_reftag": false, 00:33:21.590 "prchk_guard": false, 00:33:21.590 "hdgst": false, 00:33:21.590 "ddgst": false, 00:33:21.590 "psk": "key1", 00:33:21.590 "method": "bdev_nvme_attach_controller", 00:33:21.590 "req_id": 1 00:33:21.590 } 00:33:21.590 Got JSON-RPC error response 00:33:21.590 response: 00:33:21.590 { 00:33:21.590 "code": -5, 00:33:21.590 "message": "Input/output error" 00:33:21.590 } 00:33:21.590 20:47:13 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:21.590 20:47:13 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:21.590 20:47:13 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:21.590 20:47:13 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:21.590 20:47:13 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.590 20:47:13 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:21.590 20:47:13 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:33:21.590 20:47:13 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:21.591 20:47:13 
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.591 20:47:13 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.591 20:47:13 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.591 20:47:13 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:21.850 20:47:14 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:21.850 20:47:14 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:21.850 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:22.110 20:47:14 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:22.110 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:22.110 20:47:14 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:22.110 20:47:14 keyring_file -- keyring/file.sh@77 -- # jq length 00:33:22.110 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.370 20:47:14 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:22.370 20:47:14 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.370 [2024-07-15 20:47:14.698747] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jNPlxL7rCP': 0100660 00:33:22.370 [2024-07-15 20:47:14.698766] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:22.370 request: 00:33:22.370 { 00:33:22.370 "name": "key0", 00:33:22.370 "path": "/tmp/tmp.jNPlxL7rCP", 00:33:22.370 "method": "keyring_file_add_key", 00:33:22.370 "req_id": 1 00:33:22.370 } 00:33:22.370 Got JSON-RPC error response 00:33:22.370 response: 00:33:22.370 { 00:33:22.370 "code": -1, 00:33:22.370 "message": "Operation not permitted" 00:33:22.370 } 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:22.370 20:47:14 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:22.370 20:47:14 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:22.370 20:47:14 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.370 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.jNPlxL7rCP 00:33:22.630 20:47:14 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.jNPlxL7rCP 00:33:22.630 20:47:14 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:33:22.630 20:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:22.630 20:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:22.630 20:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:22.630 20:47:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:22.630 20:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.891 20:47:15 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:22.891 20:47:15 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:22.891 20:47:15 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.891 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:22.891 [2024-07-15 20:47:15.183980] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.jNPlxL7rCP': No such file or directory 00:33:22.891 [2024-07-15 20:47:15.183994] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:22.891 [2024-07-15 20:47:15.184010] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:22.891 [2024-07-15 20:47:15.184014] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:22.892 [2024-07-15 20:47:15.184019] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:22.892 request: 00:33:22.892 { 00:33:22.892 "name": "nvme0", 00:33:22.892 "trtype": "tcp", 00:33:22.892 "traddr": "127.0.0.1", 00:33:22.892 "adrfam": "ipv4", 00:33:22.892 
"trsvcid": "4420", 00:33:22.892 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:22.892 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:22.892 "prchk_reftag": false, 00:33:22.892 "prchk_guard": false, 00:33:22.892 "hdgst": false, 00:33:22.892 "ddgst": false, 00:33:22.892 "psk": "key0", 00:33:22.892 "method": "bdev_nvme_attach_controller", 00:33:22.892 "req_id": 1 00:33:22.892 } 00:33:22.892 Got JSON-RPC error response 00:33:22.892 response: 00:33:22.892 { 00:33:22.892 "code": -19, 00:33:22.892 "message": "No such device" 00:33:22.892 } 00:33:22.892 20:47:15 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:33:22.892 20:47:15 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:22.892 20:47:15 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:22.892 20:47:15 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:22.892 20:47:15 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:22.892 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.152 20:47:15 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@17 -- # name=key0 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@17 -- # digest=0 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@18 -- # mktemp 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.memcDjuZwv 00:33:23.152 20:47:15 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:23.152 20:47:15 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:23.152 20:47:15 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:33:23.152 20:47:15 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:23.153 20:47:15 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:23.153 20:47:15 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:33:23.153 20:47:15 keyring_file -- nvmf/common.sh@705 -- # python - 00:33:23.153 20:47:15 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.memcDjuZwv 00:33:23.153 20:47:15 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.memcDjuZwv 00:33:23.153 20:47:15 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.memcDjuZwv 00:33:23.153 20:47:15 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.memcDjuZwv 00:33:23.153 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.memcDjuZwv 00:33:23.414 20:47:15 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.414 nvme0n1 00:33:23.414 
20:47:15 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.414 20:47:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.674 20:47:15 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:23.674 20:47:15 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:23.674 20:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.934 20:47:16 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:33:23.935 20:47:16 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.935 20:47:16 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:23.935 20:47:16 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.935 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.194 20:47:16 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:24.194 20:47:16 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:24.194 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:24.194 20:47:16 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:24.194 20:47:16 keyring_file -- keyring/file.sh@104 -- # jq length 00:33:24.194 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.453 20:47:16 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:24.453 20:47:16 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.memcDjuZwv 00:33:24.453 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.memcDjuZwv 00:33:24.712 20:47:16 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.qRtnUkkZuS 00:33:24.712 20:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.qRtnUkkZuS 00:33:24.712 20:47:17 
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:24.712 20:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:24.971 nvme0n1 00:33:24.971 20:47:17 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:24.971 20:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:25.231 20:47:17 keyring_file -- keyring/file.sh@112 -- # config='{ 00:33:25.231 "subsystems": [ 00:33:25.231 { 00:33:25.231 "subsystem": "keyring", 00:33:25.231 "config": [ 00:33:25.231 { 00:33:25.231 "method": "keyring_file_add_key", 00:33:25.231 "params": { 00:33:25.231 "name": "key0", 00:33:25.231 "path": "/tmp/tmp.memcDjuZwv" 00:33:25.231 } 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "method": "keyring_file_add_key", 00:33:25.231 "params": { 00:33:25.231 "name": "key1", 00:33:25.231 "path": "/tmp/tmp.qRtnUkkZuS" 00:33:25.231 } 00:33:25.231 } 00:33:25.231 ] 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "subsystem": "iobuf", 00:33:25.231 "config": [ 00:33:25.231 { 00:33:25.231 "method": "iobuf_set_options", 00:33:25.231 "params": { 00:33:25.231 "small_pool_count": 8192, 00:33:25.231 "large_pool_count": 1024, 00:33:25.231 "small_bufsize": 8192, 00:33:25.231 "large_bufsize": 135168 00:33:25.231 } 00:33:25.231 } 00:33:25.231 ] 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "subsystem": "sock", 00:33:25.231 "config": [ 00:33:25.231 { 00:33:25.231 "method": "sock_set_default_impl", 00:33:25.231 "params": { 00:33:25.231 "impl_name": "posix" 00:33:25.231 } 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "method": "sock_impl_set_options", 00:33:25.231 "params": { 00:33:25.231 "impl_name": "ssl", 00:33:25.231 "recv_buf_size": 4096, 00:33:25.231 "send_buf_size": 4096, 00:33:25.231 "enable_recv_pipe": true, 00:33:25.231 "enable_quickack": false, 00:33:25.231 "enable_placement_id": 0, 00:33:25.231 "enable_zerocopy_send_server": true, 00:33:25.231 "enable_zerocopy_send_client": false, 00:33:25.231 "zerocopy_threshold": 0, 00:33:25.231 "tls_version": 0, 00:33:25.231 "enable_ktls": false 00:33:25.231 } 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "method": "sock_impl_set_options", 00:33:25.231 "params": { 00:33:25.231 "impl_name": "posix", 00:33:25.231 "recv_buf_size": 2097152, 00:33:25.231 "send_buf_size": 2097152, 00:33:25.231 "enable_recv_pipe": true, 00:33:25.231 "enable_quickack": false, 00:33:25.231 "enable_placement_id": 0, 00:33:25.231 "enable_zerocopy_send_server": true, 00:33:25.231 "enable_zerocopy_send_client": false, 00:33:25.231 "zerocopy_threshold": 0, 00:33:25.231 "tls_version": 0, 00:33:25.231 "enable_ktls": false 00:33:25.231 } 00:33:25.231 } 00:33:25.231 ] 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "subsystem": "vmd", 00:33:25.231 "config": [] 00:33:25.231 }, 00:33:25.231 { 00:33:25.231 "subsystem": "accel", 00:33:25.231 "config": [ 00:33:25.231 { 00:33:25.231 "method": "accel_set_options", 00:33:25.231 "params": { 00:33:25.231 "small_cache_size": 128, 00:33:25.231 "large_cache_size": 16, 00:33:25.231 "task_count": 2048, 00:33:25.231 "sequence_count": 2048, 00:33:25.231 "buf_count": 2048 00:33:25.231 } 00:33:25.231 } 00:33:25.231 ] 00:33:25.231 
}, 00:33:25.231 { 00:33:25.231 "subsystem": "bdev", 00:33:25.231 "config": [ 00:33:25.231 { 00:33:25.231 "method": "bdev_set_options", 00:33:25.231 "params": { 00:33:25.231 "bdev_io_pool_size": 65535, 00:33:25.231 "bdev_io_cache_size": 256, 00:33:25.231 "bdev_auto_examine": true, 00:33:25.231 "iobuf_small_cache_size": 128, 00:33:25.231 "iobuf_large_cache_size": 16 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_raid_set_options", 00:33:25.232 "params": { 00:33:25.232 "process_window_size_kb": 1024 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_iscsi_set_options", 00:33:25.232 "params": { 00:33:25.232 "timeout_sec": 30 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_nvme_set_options", 00:33:25.232 "params": { 00:33:25.232 "action_on_timeout": "none", 00:33:25.232 "timeout_us": 0, 00:33:25.232 "timeout_admin_us": 0, 00:33:25.232 "keep_alive_timeout_ms": 10000, 00:33:25.232 "arbitration_burst": 0, 00:33:25.232 "low_priority_weight": 0, 00:33:25.232 "medium_priority_weight": 0, 00:33:25.232 "high_priority_weight": 0, 00:33:25.232 "nvme_adminq_poll_period_us": 10000, 00:33:25.232 "nvme_ioq_poll_period_us": 0, 00:33:25.232 "io_queue_requests": 512, 00:33:25.232 "delay_cmd_submit": true, 00:33:25.232 "transport_retry_count": 4, 00:33:25.232 "bdev_retry_count": 3, 00:33:25.232 "transport_ack_timeout": 0, 00:33:25.232 "ctrlr_loss_timeout_sec": 0, 00:33:25.232 "reconnect_delay_sec": 0, 00:33:25.232 "fast_io_fail_timeout_sec": 0, 00:33:25.232 "disable_auto_failback": false, 00:33:25.232 "generate_uuids": false, 00:33:25.232 "transport_tos": 0, 00:33:25.232 "nvme_error_stat": false, 00:33:25.232 "rdma_srq_size": 0, 00:33:25.232 "io_path_stat": false, 00:33:25.232 "allow_accel_sequence": false, 00:33:25.232 "rdma_max_cq_size": 0, 00:33:25.232 "rdma_cm_event_timeout_ms": 0, 00:33:25.232 "dhchap_digests": [ 00:33:25.232 "sha256", 00:33:25.232 "sha384", 00:33:25.232 "sha512" 00:33:25.232 ], 00:33:25.232 "dhchap_dhgroups": [ 00:33:25.232 "null", 00:33:25.232 "ffdhe2048", 00:33:25.232 "ffdhe3072", 00:33:25.232 "ffdhe4096", 00:33:25.232 "ffdhe6144", 00:33:25.232 "ffdhe8192" 00:33:25.232 ] 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_nvme_attach_controller", 00:33:25.232 "params": { 00:33:25.232 "name": "nvme0", 00:33:25.232 "trtype": "TCP", 00:33:25.232 "adrfam": "IPv4", 00:33:25.232 "traddr": "127.0.0.1", 00:33:25.232 "trsvcid": "4420", 00:33:25.232 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.232 "prchk_reftag": false, 00:33:25.232 "prchk_guard": false, 00:33:25.232 "ctrlr_loss_timeout_sec": 0, 00:33:25.232 "reconnect_delay_sec": 0, 00:33:25.232 "fast_io_fail_timeout_sec": 0, 00:33:25.232 "psk": "key0", 00:33:25.232 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.232 "hdgst": false, 00:33:25.232 "ddgst": false 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_nvme_set_hotplug", 00:33:25.232 "params": { 00:33:25.232 "period_us": 100000, 00:33:25.232 "enable": false 00:33:25.232 } 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "method": "bdev_wait_for_examine" 00:33:25.232 } 00:33:25.232 ] 00:33:25.232 }, 00:33:25.232 { 00:33:25.232 "subsystem": "nbd", 00:33:25.232 "config": [] 00:33:25.232 } 00:33:25.232 ] 00:33:25.232 }' 00:33:25.232 20:47:17 keyring_file -- keyring/file.sh@114 -- # killprocess 1587888 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1587888 ']' 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1587888 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1587888 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1587888' 00:33:25.232 killing process with pid 1587888 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@967 -- # kill 1587888 00:33:25.232 Received shutdown signal, test time was about 1.000000 seconds 00:33:25.232 00:33:25.232 Latency(us) 00:33:25.232 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:25.232 =================================================================================================================== 00:33:25.232 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:25.232 20:47:17 keyring_file -- common/autotest_common.sh@972 -- # wait 1587888 00:33:25.493 20:47:17 keyring_file -- keyring/file.sh@117 -- # bperfpid=1589381 00:33:25.493 20:47:17 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1589381 /var/tmp/bperf.sock 00:33:25.493 20:47:17 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1589381 ']' 00:33:25.493 20:47:17 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:25.493 20:47:17 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:25.493 20:47:17 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:25.493 20:47:17 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:25.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
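The -c /dev/fd/63 on the new bdevperf command line is bash process substitution at work: the JSON captured from the first bdevperf via save_config is replayed into a fresh instance as a synthetic file, so the keys and the controller are restored from config alone, without reissuing RPCs. In outline (paths and flags as used throughout this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  bperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf

  config=$($rpc -s /var/tmp/bperf.sock save_config)      # dump live state as JSON
  $bperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 \
         -r /var/tmp/bperf.sock -z -c <(echo "$config")  # the shell exposes this as /dev/fd/63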
00:33:25.493 20:47:17 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:25.493 20:47:17 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:33:25.493 "subsystems": [ 00:33:25.493 { 00:33:25.493 "subsystem": "keyring", 00:33:25.493 "config": [ 00:33:25.493 { 00:33:25.493 "method": "keyring_file_add_key", 00:33:25.493 "params": { 00:33:25.493 "name": "key0", 00:33:25.493 "path": "/tmp/tmp.memcDjuZwv" 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "keyring_file_add_key", 00:33:25.493 "params": { 00:33:25.493 "name": "key1", 00:33:25.493 "path": "/tmp/tmp.qRtnUkkZuS" 00:33:25.493 } 00:33:25.493 } 00:33:25.493 ] 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "subsystem": "iobuf", 00:33:25.493 "config": [ 00:33:25.493 { 00:33:25.493 "method": "iobuf_set_options", 00:33:25.493 "params": { 00:33:25.493 "small_pool_count": 8192, 00:33:25.493 "large_pool_count": 1024, 00:33:25.493 "small_bufsize": 8192, 00:33:25.493 "large_bufsize": 135168 00:33:25.493 } 00:33:25.493 } 00:33:25.493 ] 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "subsystem": "sock", 00:33:25.493 "config": [ 00:33:25.493 { 00:33:25.493 "method": "sock_set_default_impl", 00:33:25.493 "params": { 00:33:25.493 "impl_name": "posix" 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "sock_impl_set_options", 00:33:25.493 "params": { 00:33:25.493 "impl_name": "ssl", 00:33:25.493 "recv_buf_size": 4096, 00:33:25.493 "send_buf_size": 4096, 00:33:25.493 "enable_recv_pipe": true, 00:33:25.493 "enable_quickack": false, 00:33:25.493 "enable_placement_id": 0, 00:33:25.493 "enable_zerocopy_send_server": true, 00:33:25.493 "enable_zerocopy_send_client": false, 00:33:25.493 "zerocopy_threshold": 0, 00:33:25.493 "tls_version": 0, 00:33:25.493 "enable_ktls": false 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "sock_impl_set_options", 00:33:25.493 "params": { 00:33:25.493 "impl_name": "posix", 00:33:25.493 "recv_buf_size": 2097152, 00:33:25.493 "send_buf_size": 2097152, 00:33:25.493 "enable_recv_pipe": true, 00:33:25.493 "enable_quickack": false, 00:33:25.493 "enable_placement_id": 0, 00:33:25.493 "enable_zerocopy_send_server": true, 00:33:25.493 "enable_zerocopy_send_client": false, 00:33:25.493 "zerocopy_threshold": 0, 00:33:25.493 "tls_version": 0, 00:33:25.493 "enable_ktls": false 00:33:25.493 } 00:33:25.493 } 00:33:25.493 ] 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "subsystem": "vmd", 00:33:25.493 "config": [] 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "subsystem": "accel", 00:33:25.493 "config": [ 00:33:25.493 { 00:33:25.493 "method": "accel_set_options", 00:33:25.493 "params": { 00:33:25.493 "small_cache_size": 128, 00:33:25.493 "large_cache_size": 16, 00:33:25.493 "task_count": 2048, 00:33:25.493 "sequence_count": 2048, 00:33:25.493 "buf_count": 2048 00:33:25.493 } 00:33:25.493 } 00:33:25.493 ] 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "subsystem": "bdev", 00:33:25.493 "config": [ 00:33:25.493 { 00:33:25.493 "method": "bdev_set_options", 00:33:25.493 "params": { 00:33:25.493 "bdev_io_pool_size": 65535, 00:33:25.493 "bdev_io_cache_size": 256, 00:33:25.493 "bdev_auto_examine": true, 00:33:25.493 "iobuf_small_cache_size": 128, 00:33:25.493 "iobuf_large_cache_size": 16 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "bdev_raid_set_options", 00:33:25.493 "params": { 00:33:25.493 "process_window_size_kb": 1024 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "bdev_iscsi_set_options", 00:33:25.493 "params": { 00:33:25.493 
"timeout_sec": 30 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "bdev_nvme_set_options", 00:33:25.493 "params": { 00:33:25.493 "action_on_timeout": "none", 00:33:25.493 "timeout_us": 0, 00:33:25.493 "timeout_admin_us": 0, 00:33:25.493 "keep_alive_timeout_ms": 10000, 00:33:25.493 "arbitration_burst": 0, 00:33:25.493 "low_priority_weight": 0, 00:33:25.493 "medium_priority_weight": 0, 00:33:25.493 "high_priority_weight": 0, 00:33:25.493 "nvme_adminq_poll_period_us": 10000, 00:33:25.493 "nvme_ioq_poll_period_us": 0, 00:33:25.493 "io_queue_requests": 512, 00:33:25.493 "delay_cmd_submit": true, 00:33:25.493 "transport_retry_count": 4, 00:33:25.493 "bdev_retry_count": 3, 00:33:25.493 "transport_ack_timeout": 0, 00:33:25.493 "ctrlr_loss_timeout_sec": 0, 00:33:25.493 "reconnect_delay_sec": 0, 00:33:25.493 "fast_io_fail_timeout_sec": 0, 00:33:25.493 "disable_auto_failback": false, 00:33:25.493 "generate_uuids": false, 00:33:25.493 "transport_tos": 0, 00:33:25.493 "nvme_error_stat": false, 00:33:25.493 "rdma_srq_size": 0, 00:33:25.493 "io_path_stat": false, 00:33:25.493 "allow_accel_sequence": false, 00:33:25.493 "rdma_max_cq_size": 0, 00:33:25.493 "rdma_cm_event_timeout_ms": 0, 00:33:25.493 "dhchap_digests": [ 00:33:25.493 "sha256", 00:33:25.493 "sha384", 00:33:25.493 "sha512" 00:33:25.493 ], 00:33:25.493 "dhchap_dhgroups": [ 00:33:25.493 "null", 00:33:25.493 "ffdhe2048", 00:33:25.493 "ffdhe3072", 00:33:25.493 "ffdhe4096", 00:33:25.493 "ffdhe6144", 00:33:25.493 "ffdhe8192" 00:33:25.493 ] 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "bdev_nvme_attach_controller", 00:33:25.493 "params": { 00:33:25.493 "name": "nvme0", 00:33:25.493 "trtype": "TCP", 00:33:25.493 "adrfam": "IPv4", 00:33:25.493 "traddr": "127.0.0.1", 00:33:25.493 "trsvcid": "4420", 00:33:25.493 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:25.493 "prchk_reftag": false, 00:33:25.493 "prchk_guard": false, 00:33:25.493 "ctrlr_loss_timeout_sec": 0, 00:33:25.493 "reconnect_delay_sec": 0, 00:33:25.493 "fast_io_fail_timeout_sec": 0, 00:33:25.493 "psk": "key0", 00:33:25.493 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:25.493 "hdgst": false, 00:33:25.493 "ddgst": false 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.493 "method": "bdev_nvme_set_hotplug", 00:33:25.493 "params": { 00:33:25.493 "period_us": 100000, 00:33:25.493 "enable": false 00:33:25.493 } 00:33:25.493 }, 00:33:25.493 { 00:33:25.494 "method": "bdev_wait_for_examine" 00:33:25.494 } 00:33:25.494 ] 00:33:25.494 }, 00:33:25.494 { 00:33:25.494 "subsystem": "nbd", 00:33:25.494 "config": [] 00:33:25.494 } 00:33:25.494 ] 00:33:25.494 }' 00:33:25.494 20:47:17 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:25.494 [2024-07-15 20:47:17.684300] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
00:33:25.494 [2024-07-15 20:47:17.684355] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589381 ] 00:33:25.494 EAL: No free 2048 kB hugepages reported on node 1 00:33:25.494 [2024-07-15 20:47:17.767172] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:25.494 [2024-07-15 20:47:17.820557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:25.754 [2024-07-15 20:47:17.962604] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:26.323 20:47:18 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:26.323 20:47:18 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:33:26.323 20:47:18 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:26.323 20:47:18 keyring_file -- keyring/file.sh@120 -- # jq length 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.323 20:47:18 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:26.323 20:47:18 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:26.323 20:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.583 20:47:18 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:26.583 20:47:18 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:26.583 20:47:18 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:26.583 20:47:18 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:26.583 20:47:18 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:26.583 20:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:26.841 20:47:19 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:26.841 20:47:19 keyring_file -- keyring/file.sh@1 -- # cleanup 00:33:26.841 20:47:19 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.memcDjuZwv /tmp/tmp.qRtnUkkZuS 00:33:26.841 20:47:19 keyring_file -- keyring/file.sh@20 -- # killprocess 1589381 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1589381 ']' 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1589381 00:33:26.841 20:47:19 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1589381 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1589381' 00:33:26.841 killing process with pid 1589381 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@967 -- # kill 1589381 00:33:26.841 Received shutdown signal, test time was about 1.000000 seconds 00:33:26.841 00:33:26.841 Latency(us) 00:33:26.841 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:26.841 =================================================================================================================== 00:33:26.841 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:26.841 20:47:19 keyring_file -- common/autotest_common.sh@972 -- # wait 1589381 00:33:27.100 20:47:19 keyring_file -- keyring/file.sh@21 -- # killprocess 1587675 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1587675 ']' 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1587675 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@953 -- # uname 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1587675 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1587675' 00:33:27.100 killing process with pid 1587675 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@967 -- # kill 1587675 00:33:27.100 [2024-07-15 20:47:19.312266] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:27.100 20:47:19 keyring_file -- common/autotest_common.sh@972 -- # wait 1587675 00:33:27.359 00:33:27.359 real 0m10.875s 00:33:27.359 user 0m25.533s 00:33:27.359 sys 0m2.699s 00:33:27.359 20:47:19 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:27.359 20:47:19 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:33:27.359 ************************************ 00:33:27.359 END TEST keyring_file 00:33:27.359 ************************************ 00:33:27.359 20:47:19 -- common/autotest_common.sh@1142 -- # return 0 00:33:27.359 20:47:19 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:33:27.359 20:47:19 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:27.359 20:47:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:27.359 20:47:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:27.359 20:47:19 -- common/autotest_common.sh@10 -- # set +x 00:33:27.359 ************************************ 00:33:27.359 START TEST keyring_linux 00:33:27.359 ************************************ 00:33:27.359 20:47:19 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:33:27.359 * Looking for test storage... 00:33:27.359 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:33:27.359 20:47:19 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=00539ede-7deb-ec11-9bc7-a4bf01928396 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.359 20:47:19 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.359 20:47:19 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.359 20:47:19 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.359 20:47:19 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.359 20:47:19 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.359 20:47:19 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.359 20:47:19 keyring_linux -- paths/export.sh@5 -- # export PATH 00:33:27.359 20:47:19 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:27.359 20:47:19 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:27.359 20:47:19 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:33:27.359 20:47:19 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:33:27.620 20:47:19 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:27.620 20:47:19 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:33:27.620 /tmp/:spdk-test:key0 00:33:27.620 20:47:19 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:33:27.620 20:47:19 keyring_linux -- nvmf/common.sh@705 -- # python - 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:33:27.620 20:47:19 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:33:27.620 /tmp/:spdk-test:key1 00:33:27.620 20:47:19 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1589951 00:33:27.620 20:47:19 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1589951 00:33:27.620 20:47:19 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1589951 ']' 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:27.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:27.620 20:47:19 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:27.620 [2024-07-15 20:47:19.892675] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
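
The prep_key trace above shows how each raw hex key becomes an NVMe TLS interchange-format PSK before it is handed to the target. A condensed sketch of what format_interchange_psk computes, assuming (as the format_key trace suggests) that the payload is the literal key string followed by its little-endian CRC32, base64-encoded, with 00 as the "no digest" identifier:

    key=00112233445566778899aabbccddeeff
    python3 - "$key" <<'EOF'
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, byteorder="little")  # 4-byte CRC trailer
# interchange form: NVMeTLSkey-1:<digest>:<base64(key bytes + CRC32)>:
print("NVMeTLSkey-1:00:" + base64.b64encode(key + crc).decode() + ":")
EOF

For key0 this should reproduce the NVMeTLSkey-1:00:MDAx...JEiQ: string that the keyctl add calls below load into the kernel session keyring.
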
00:33:27.620 [2024-07-15 20:47:19.892750] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1589951 ] 00:33:27.620 EAL: No free 2048 kB hugepages reported on node 1 00:33:27.620 [2024-07-15 20:47:19.963787] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.879 [2024-07-15 20:47:20.041424] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:28.448 [2024-07-15 20:47:20.662119] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.448 null0 00:33:28.448 [2024-07-15 20:47:20.694161] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:28.448 [2024-07-15 20:47:20.694545] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:33:28.448 769270886 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:33:28.448 474529565 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1590139 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1590139 /var/tmp/bperf.sock 00:33:28.448 20:47:20 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1590139 ']' 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:28.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.448 20:47:20 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:28.448 [2024-07-15 20:47:20.779392] Starting SPDK v24.09-pre git sha1 6c0846996 / DPDK 24.03.0 initialization... 
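
The keyctl add calls above stage both PSKs in the kernel session keyring (@s) and print their serial numbers, 769270886 and 474529565, which everything that follows resolves by name. The lifecycle the test exercises, as a standalone sketch using the key0 string from this run:

    # "keyctl add" prints the new key's serial number on stdout
    sn=$(keyctl add user :spdk-test:key0 "NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:" @s)
    keyctl search @s user :spdk-test:key0   # resolves the same serial
    keyctl print "$sn"                      # dumps the stored payload
    keyctl unlink "$sn"                     # detach it again, as cleanup() does
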
00:33:28.448 [2024-07-15 20:47:20.779440] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1590139 ] 00:33:28.448 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.720 [2024-07-15 20:47:20.857482] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.720 [2024-07-15 20:47:20.911508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.287 20:47:21 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.287 20:47:21 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:33:29.287 20:47:21 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:33:29.287 20:47:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:33:29.547 20:47:21 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:33:29.547 20:47:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.547 20:47:21 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:29.547 20:47:21 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:33:29.805 [2024-07-15 20:47:22.018567] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:29.805 nvme0n1 00:33:29.805 20:47:22 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:33:29.805 20:47:22 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:33:29.805 20:47:22 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:29.805 20:47:22 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:29.805 20:47:22 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:29.805 20:47:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:33:30.064 20:47:22 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:30.064 20:47:22 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:33:30.064 20:47:22 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@25 -- # sn=769270886 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@26 -- # [[ 769270886 == \7\6\9\2\7\0\8\8\6 ]] 00:33:30.064 20:47:22 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 769270886 00:33:30.324 20:47:22 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:33:30.324 20:47:22 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:30.324 Running I/O for 1 seconds... 00:33:31.263 00:33:31.263 Latency(us) 00:33:31.263 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.263 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:31.263 nvme0n1 : 1.01 11208.30 43.78 0.00 0.00 11349.73 7809.71 17148.59 00:33:31.263 =================================================================================================================== 00:33:31.263 Total : 11208.30 43.78 0.00 0.00 11349.73 7809.71 17148.59 00:33:31.263 0 00:33:31.263 20:47:23 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:31.263 20:47:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:33:31.524 20:47:23 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@23 -- # return 00:33:31.524 20:47:23 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:31.524 20:47:23 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:33:31.524 20:47:23 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1
00:33:31.784 [2024-07-15 20:47:24.036553] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected
00:33:31.784 [2024-07-15 20:47:24.037301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb000 (107): Transport endpoint is not connected
00:33:31.784 [2024-07-15 20:47:24.038297] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18eb000 (9): Bad file descriptor
00:33:31.784 [2024-07-15 20:47:24.039298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state
00:33:31.784 [2024-07-15 20:47:24.039304] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1
00:33:31.784 [2024-07-15 20:47:24.039310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state.
00:33:31.784 request:
00:33:31.784 {
00:33:31.784 "name": "nvme0",
00:33:31.784 "trtype": "tcp",
00:33:31.784 "traddr": "127.0.0.1",
00:33:31.784 "adrfam": "ipv4",
00:33:31.784 "trsvcid": "4420",
00:33:31.784 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:33:31.784 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:33:31.784 "prchk_reftag": false,
00:33:31.784 "prchk_guard": false,
00:33:31.784 "hdgst": false,
00:33:31.784 "ddgst": false,
00:33:31.784 "psk": ":spdk-test:key1",
00:33:31.784 "method": "bdev_nvme_attach_controller",
00:33:31.784 "req_id": 1
00:33:31.784 }
00:33:31.784 Got JSON-RPC error response
00:33:31.784 response:
00:33:31.784 {
00:33:31.784 "code": -5,
00:33:31.784 "message": "Input/output error"
00:33:31.784 }
00:33:31.784 20:47:24 keyring_linux -- common/autotest_common.sh@651 -- # es=1
00:33:31.784 20:47:24 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 ))
00:33:31.784 20:47:24 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]]
00:33:31.784 20:47:24 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 ))
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@1 -- # cleanup
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@33 -- # sn=769270886
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 769270886
00:33:31.784 1 links removed
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:33:31.784 20:47:24 keyring_linux -- keyring/linux.sh@33 -- # sn=474529565
20:47:24 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 474529565 00:33:31.785 1 links removed 00:33:31.785 20:47:24 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1590139 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1590139 ']' 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1590139 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1590139 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1590139' 00:33:31.785 killing process with pid 1590139 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 1590139 00:33:31.785 Received shutdown signal, test time was about 1.000000 seconds 00:33:31.785 00:33:31.785 Latency(us) 00:33:31.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.785 =================================================================================================================== 00:33:31.785 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:31.785 20:47:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 1590139 00:33:32.044 20:47:24 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1589951 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1589951 ']' 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1589951 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1589951 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1589951' 00:33:32.044 killing process with pid 1589951 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@967 -- # kill 1589951 00:33:32.044 20:47:24 keyring_linux -- common/autotest_common.sh@972 -- # wait 1589951 00:33:32.306 00:33:32.306 real 0m4.902s 00:33:32.306 user 0m8.440s 00:33:32.306 sys 0m1.511s 00:33:32.306 20:47:24 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:32.306 20:47:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:33:32.306 ************************************ 00:33:32.306 END TEST keyring_linux 00:33:32.306 ************************************ 00:33:32.306 20:47:24 -- common/autotest_common.sh@1142 -- # return 0 00:33:32.306 20:47:24 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- 
spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:33:32.306 20:47:24 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:33:32.306 20:47:24 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:33:32.306 20:47:24 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:33:32.306 20:47:24 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:33:32.306 20:47:24 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:33:32.306 20:47:24 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:33:32.306 20:47:24 -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:32.306 20:47:24 -- common/autotest_common.sh@10 -- # set +x 00:33:32.306 20:47:24 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:33:32.306 20:47:24 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:33:32.306 20:47:24 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:33:32.306 20:47:24 -- common/autotest_common.sh@10 -- # set +x 00:33:40.445 INFO: APP EXITING 00:33:40.445 INFO: killing all VMs 00:33:40.445 INFO: killing vhost app 00:33:40.445 INFO: EXIT DONE 00:33:43.744 0000:80:01.6 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.7 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.4 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.5 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.2 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.3 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.0 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:80:01.1 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:65:00.0 (144d a80a): Already using the nvme driver 00:33:43.744 0000:00:01.6 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.7 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.4 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.5 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.2 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.3 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.0 (8086 0b00): Already using the ioatdma driver 00:33:43.744 0000:00:01.1 (8086 0b00): Already using the ioatdma driver 00:33:47.935 Cleaning 00:33:47.935 Removing: /var/run/dpdk/spdk0/config 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:33:47.935 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:47.935 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:47.935 Removing: /var/run/dpdk/spdk1/config 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:47.935 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:33:47.935 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:47.936 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:47.936 Removing: /var/run/dpdk/spdk1/mp_socket 00:33:47.936 Removing: /var/run/dpdk/spdk2/config 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:33:47.936 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:47.936 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:47.936 Removing: /var/run/dpdk/spdk3/config 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:33:47.936 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:47.936 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:47.936 Removing: /var/run/dpdk/spdk4/config 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:33:47.936 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:47.936 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:47.936 Removing: /dev/shm/bdev_svc_trace.1 00:33:47.936 Removing: /dev/shm/nvmf_trace.0 00:33:47.936 Removing: /dev/shm/spdk_tgt_trace.pid1105156 00:33:47.936 Removing: /var/run/dpdk/spdk0 00:33:47.936 Removing: /var/run/dpdk/spdk1 00:33:47.936 Removing: /var/run/dpdk/spdk2 00:33:47.936 Removing: /var/run/dpdk/spdk3 00:33:47.936 Removing: /var/run/dpdk/spdk4 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1103653 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1105156 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1105726 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1106955 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1107078 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1108383 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1108452 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1108869 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1109726 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1110481 00:33:47.936 Removing: 
/var/run/dpdk/spdk_pid1110867 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1111225 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1111335 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1111721 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1112083 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1112373 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1112585 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1113877 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1117132 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1117491 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1117859 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1117891 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1118484 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1118582 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1118982 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1119288 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1119517 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1119669 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1119968 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1120034 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1120475 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1120829 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1121221 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1121566 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1121630 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1121692 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1122050 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1122397 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1122715 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1122903 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1123138 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1123485 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1123840 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1124190 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1124388 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1124590 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1124930 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1125278 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1125631 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1125834 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1126038 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1126372 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1126722 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1127074 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1127359 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1127578 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1127894 00:33:47.936 Removing: /var/run/dpdk/spdk_pid1128364 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1133626 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1192233 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1197854 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1210000 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1217073 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1222445 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1223121 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1230871 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1238537 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1238543 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1239554 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1240560 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1241673 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1242343 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1242347 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1242981 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1243066 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1243255 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1244374 00:33:48.196 Removing: 
/var/run/dpdk/spdk_pid1245410 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1246478 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1247129 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1247155 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1247490 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1248753 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1250004 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1260690 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1261043 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1266697 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1274129 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1277110 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1290531 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1302737 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1304744 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1305876 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1327740 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1332812 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1364703 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1370541 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1372448 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1374560 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1374904 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1375092 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1375262 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1375972 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1378003 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1379071 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1379450 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1382113 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1382856 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1383571 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1389181 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1402733 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1407433 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1415195 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1416463 00:33:48.196 Removing: /var/run/dpdk/spdk_pid1418208 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1423963 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1429339 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1439180 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1439280 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1444855 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1445204 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1445512 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1445967 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1445972 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1452469 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1453042 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1458822 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1462168 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1469051 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1476157 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1486499 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1495566 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1495597 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1520023 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1520703 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1521392 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1522218 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1523219 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1523979 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1524739 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1525505 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1530903 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1531240 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1538951 00:33:48.456 Removing: 
/var/run/dpdk/spdk_pid1539109 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1541840 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1549452 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1549547 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1556635 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1559047 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1561256 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1562778 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1565116 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1566506 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1577353 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1578006 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1578541 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1581433 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1582106 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1582749 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1587675 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1587888 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1589381 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1589951 00:33:48.456 Removing: /var/run/dpdk/spdk_pid1590139 00:33:48.456 Clean 00:33:48.716 20:47:40 -- common/autotest_common.sh@1451 -- # return 0 00:33:48.716 20:47:40 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:33:48.716 20:47:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.716 20:47:40 -- common/autotest_common.sh@10 -- # set +x 00:33:48.716 20:47:40 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:33:48.716 20:47:40 -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:48.716 20:47:40 -- common/autotest_common.sh@10 -- # set +x 00:33:48.716 20:47:40 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:33:48.716 20:47:40 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:33:48.716 20:47:40 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:33:48.716 20:47:40 -- spdk/autotest.sh@391 -- # hash lcov 00:33:48.716 20:47:40 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:48.716 20:47:40 -- spdk/autotest.sh@393 -- # hostname 00:33:48.716 20:47:40 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-cyp-12 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:33:48.976 geninfo: WARNING: invalid characters removed from testname! 
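
The coverage post-processing that follows runs lcov in three stages: capture the counters produced by the test run, merge them with the baseline captured at build time, then strip third-party and system paths. Condensed, with the repeated --rc options omitted for readability (the cov_base.info baseline is assumed from earlier in the run):

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    out=../output
    lcov --no-external -q -c -d . -t "$(hostname)" -o $out/cov_test.info
    lcov -q -a $out/cov_base.info -a $out/cov_test.info -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '*/dpdk/*' -o $out/cov_total.info
    lcov -q -r $out/cov_total.info '/usr/*' -o $out/cov_total.info
    # ...and likewise for the example apps (vmd, spdk_lspci, spdk_top) below
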
00:34:15.600 20:48:05 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:15.932 20:48:08 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:17.867 20:48:10 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:19.779 20:48:11 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:21.159 20:48:13 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:22.635 20:48:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:34:24.547 20:48:16 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:24.547 20:48:16 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:24.547 20:48:16 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:24.547 20:48:16 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:24.547 20:48:16 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:24.547 20:48:16 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.547 20:48:16 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.547 20:48:16 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.547 20:48:16 -- paths/export.sh@5 -- $ export PATH 00:34:24.547 20:48:16 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:24.547 20:48:16 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:34:24.547 20:48:16 -- common/autobuild_common.sh@444 -- $ date +%s 00:34:24.547 20:48:16 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1721069296.XXXXXX 00:34:24.547 20:48:16 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1721069296.4m8Qb6 00:34:24.547 20:48:16 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:34:24.547 20:48:16 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:34:24.547 20:48:16 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:34:24.547 20:48:16 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:34:24.547 20:48:16 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:34:24.547 20:48:16 -- common/autobuild_common.sh@460 -- $ get_config_params 00:34:24.547 20:48:16 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:34:24.547 20:48:16 -- common/autotest_common.sh@10 -- $ set +x 00:34:24.547 20:48:16 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:34:24.547 20:48:16 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:34:24.547 20:48:16 -- pm/common@17 -- $ local monitor 00:34:24.547 20:48:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:24.547 20:48:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:24.547 20:48:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:24.547 20:48:16 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:24.547 20:48:16 -- pm/common@21 -- $ date +%s 00:34:24.547 20:48:16 -- pm/common@25 -- $ sleep 1 00:34:24.547 
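
The config_params string assembled above is the flag set this build was configured with; reproducing the same configuration locally should come down to the following sketch, with the parallelism taken from the MAKEFLAGS=-j144 that autopackage.sh sets just below:

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-coverage --with-ublk --with-vfio-user
    make -j144
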
20:48:16 -- pm/common@21 -- $ date +%s 00:34:24.547 20:48:16 -- pm/common@21 -- $ date +%s 00:34:24.547 20:48:16 -- pm/common@21 -- $ date +%s 00:34:24.547 20:48:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069296 00:34:24.547 20:48:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069296 00:34:24.547 20:48:16 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069296 00:34:24.547 20:48:16 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721069296 00:34:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069296_collect-vmstat.pm.log 00:34:24.547 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069296_collect-cpu-load.pm.log 00:34:24.548 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069296_collect-cpu-temp.pm.log 00:34:24.548 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721069296_collect-bmc-pm.bmc.pm.log 00:34:25.488 20:48:17 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:34:25.488 20:48:17 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j144 00:34:25.488 20:48:17 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:25.488 20:48:17 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:25.488 20:48:17 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:25.488 20:48:17 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:25.488 20:48:17 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:25.488 20:48:17 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:25.488 20:48:17 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:34:25.488 20:48:17 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:25.488 20:48:17 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:25.488 20:48:17 -- pm/common@29 -- $ signal_monitor_resources TERM 00:34:25.488 20:48:17 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:34:25.488 20:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:25.488 20:48:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:34:25.488 20:48:17 -- pm/common@44 -- $ pid=1603469 00:34:25.488 20:48:17 -- pm/common@50 -- $ kill -TERM 1603469 00:34:25.488 20:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:25.488 20:48:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:34:25.488 20:48:17 -- pm/common@44 -- $ pid=1603470 00:34:25.488 20:48:17 -- pm/common@50 -- $ 
kill -TERM 1603470 00:34:25.488 20:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:25.488 20:48:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:34:25.488 20:48:17 -- pm/common@44 -- $ pid=1603472 00:34:25.488 20:48:17 -- pm/common@50 -- $ kill -TERM 1603472 00:34:25.488 20:48:17 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:25.488 20:48:17 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:34:25.488 20:48:17 -- pm/common@44 -- $ pid=1603495 00:34:25.488 20:48:17 -- pm/common@50 -- $ sudo -E kill -TERM 1603495 00:34:25.488 + [[ -n 979267 ]] 00:34:25.488 + sudo kill 979267 00:34:25.499 [Pipeline] } 00:34:25.517 [Pipeline] // stage 00:34:25.522 [Pipeline] } 00:34:25.539 [Pipeline] // timeout 00:34:25.544 [Pipeline] } 00:34:25.561 [Pipeline] // catchError 00:34:25.567 [Pipeline] } 00:34:25.585 [Pipeline] // wrap 00:34:25.591 [Pipeline] } 00:34:25.607 [Pipeline] // catchError 00:34:25.617 [Pipeline] stage 00:34:25.620 [Pipeline] { (Epilogue) 00:34:25.636 [Pipeline] catchError 00:34:25.638 [Pipeline] { 00:34:25.656 [Pipeline] echo 00:34:25.658 Cleanup processes 00:34:25.666 [Pipeline] sh 00:34:25.957 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:25.957 1603574 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:34:25.957 1604016 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:25.973 [Pipeline] sh 00:34:26.260 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:34:26.260 ++ grep -v 'sudo pgrep' 00:34:26.260 ++ awk '{print $1}' 00:34:26.260 + sudo kill -9 1603574 00:34:26.273 [Pipeline] sh 00:34:26.559 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:38.797 [Pipeline] sh 00:34:39.083 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:39.083 Artifacts sizes are good 00:34:39.097 [Pipeline] archiveArtifacts 00:34:39.104 Archiving artifacts 00:34:39.289 [Pipeline] sh 00:34:39.573 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:34:39.588 [Pipeline] cleanWs 00:34:39.599 [WS-CLEANUP] Deleting project workspace... 00:34:39.599 [WS-CLEANUP] Deferred wipeout is used... 00:34:39.606 [WS-CLEANUP] done 00:34:39.608 [Pipeline] } 00:34:39.628 [Pipeline] // catchError 00:34:39.641 [Pipeline] sh 00:34:39.925 + logger -p user.info -t JENKINS-CI 00:34:39.935 [Pipeline] } 00:34:39.950 [Pipeline] // stage 00:34:39.954 [Pipeline] } 00:34:39.971 [Pipeline] // node 00:34:39.975 [Pipeline] End of Pipeline 00:34:40.066 Finished: SUCCESS
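
A closing note on the resource monitors that bracketed the autopackage step: each collect-* script records its pid under the power output directory, and stop_monitor_resources signals exactly those pids, which is the kill -TERM sequence visible above. A sketch of the pairing, with paths as in this run (-l and -p mirror the flags in the trace, where -p names the log prefix the Redirecting lines point at):

    power=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power
    # start (autopackage does this for cpu-load, vmstat, cpu-temp, bmc-pm)
    scripts/perf/pm/collect-cpu-load -d "$power" -l -p "monitor.autopackage.sh.$(date +%s)"
    # stop: read back the pidfile the collector wrote and signal it
    [[ -e $power/collect-cpu-load.pid ]] && kill -TERM "$(cat "$power/collect-cpu-load.pid")"
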